Apologies for the blank message (if it came through).

I have heard here and there that there might be a plan in development to let a raid-z grow its "raid-z'ness" to accommodate a new disk added to it.

Example: I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on space and would like to add a 5th disk. The idea is to pop in disk 5 and have the raid-z expand its layout and free space to incorporate the 5th disk.

Is there indeed such a thing in the works? Or in consideration?
As has been mentioned on this forum, this would require a significant change to the way RAID-Z works. To my knowledge there is no such project at present. Do you have a use case where this is required?

Adam

On Sat, Jul 07, 2007 at 03:37:19PM -0400, Echo B wrote:
> I have heard here and there that there might be a plan in development
> to let a raid-z grow its "raid-z'ness" to accommodate a new disk added
> to it.
> [...]
> Is there indeed such a thing in the works? Or in consideration?

-- 
Adam Leventhal, Solaris Kernel Development       http://blogs.sun.com/ahl
> As has been mentioned on this forum, this would require a significant
> change to the way RAID-Z works. To my knowledge there is no such project
> at present. Do you have a use case where this is required?

Didn't he give a use case?

My use case is that I want to use ZFS for my archive disk, but before the disk gets too old I want to add another two disks, when they are cheaper, so I have a redundant 3-disk array.

Another use case of mine is that I want to start a new storage server with 3 disks, but I anticipate running out of space by an unknown amount, so I want to be able to add a couple more disks when I need to.

Seeing the demand for this is not rocket science. Hardware RAID5 adapters, NAS boxes, and Linux all support this feature.
> Another use case of mine is that I want to start a new storage server
> with 3 disks, but I anticipate running out of space by an unknown amount,
> so I want to be able to add a couple more disks when I need to.

You can do that with ZFS today. The conceptual difference is that instead of growing an existing RAID stripe, you add another one.

To take your example, suppose you start life with 3 disks -- A, B, C. You'd create your pool by saying:

    zpool create mypool raidz A B C

You could later grow the pool by adding three more, like this:

    zpool add mypool raidz D E F

In practice, this is generally a more useful model because by the time you need three more disks, they're often a different (higher) capacity. That capacity is wasted if you add them to an existing RAID stripe of smaller disks.

To make this concrete, suppose A, B, C are 250G drives, and D, E, F are 750G drives. If you were using something like LVM and grew your RAID-5 stripe, you'd get 5 * 250G = 1.25T capacity. With two RAID-Z stripes in ZFS, you'd get 2 * 250G + 2 * 750G = 2.0T.

Jeff
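For what it's worth, a quick way to confirm that the add actually grew the pool is to compare the reported size before and after (same placeholder pool and disk names as above; the exact zpool list columns vary by release):

    zpool list mypool                 # note SIZE with one raidz vdev
    zpool add mypool raidz D E F
    zpool list mypool                 # SIZE now includes the second raidz vdev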
> You can do that with ZFS today.

I appreciate what you are saying, but we are talking about different things.

RAIDz does not let you do this: start from one disk, add another disk to mirror the data, add another disk to make it a RAIDz array, and add another disk to increase the size of the RAIDz array.

Solaris software RAID5, Linux software RAID5, and many hardware systems with RAID5-like functionality all support adding a disk to a RAID5 array. So it makes sense that people expect the same feature from RAIDz, a system that appears very similar to RAID5. As of now, RAIDz is missing this check mark. Personally I don't use RAIDz for this reason.
I've read a lot of posts on this forum by people interested in using ZFS where today they would use a NAS or a home server with RAID, and I'm sure they have the same questions I do about expansion/recovery. I think the use case is that we want ZFS for the transactional features and because it doesn't require hardware RAID, but we should be able to do everything we can do with an Infrant (Netgear) ReadyNAS NV+ or a Drobo... in a nutshell.

I want one storage pool (mount), because I don't want to navigate through different drives/mounts to go through my movies/music/whatever. I want to at least be able to add drives later of the same size and still have one pool.

I'm not sure if the other poster was talking about the Infrant specifically when he said NAS, but I haven't seen another NAS or external storage device that lets me add another drive and automatically expands. Infrant calls this RAID-X, and Drobo says it's something proprietary in their RAID. How does expansion work with other NAS boxes like Thecus?

Drobo also lets me add drives of any size and still have one volume, but Drobo can only go up to 2TB without splitting into two volumes (Infrant can't go past 2TB at all yet). Another feature of Drobo that Infrant might not have: if a drive fails, or if I just decide to pull a drive out while the box is running, all I need to do is put a new blank drive in and it will rebuild the pool automatically, so I'm ready for another failure. How does this recovery work in ZFS?

zpool create... zpool add... I can't tell from what you're saying... for instance, I want to buy 2 1TB drives for a 4 or 5 bay setup. Later I want to add 1TB at a time, until I fill the bays. It sounds like if I started with 2 drives and then created a new stripe with only one drive, then that new stripe won't have parity... what happens if that drive fails?

Being able to add drives of any size like the Drobo isn't essential... it's nice to have. Being able to pop out a drive, pop in a new one, and have it automatically regenerate is something I accept I won't have in an OpenSolaris-based system I build myself, just like I know it's not going to detect a drive and auto-expand without me doing something in a shell... but being able to add one drive, having full parity on all drives, and seeing all the files under a single mount in the filesystem are all important.
On Sun, Jul 29, 2007 at 03:54:37PM -0700, Ryan Rhodes wrote:
> I want one storage pool (mount), because I don't want to navigate through
> different drives/mounts to go through my movies/music/whatever. I want to
> at least be able to add drives later of the same size and still have one
> pool.

Just to address a slight confusion, you can have a single pool with many associated RAID vdevs (stripes) and many different filesystems (mount points).

> zpool create... zpool add... I can't tell from what you're saying... for
> instance, I want to buy 2 1TB drives for a 4 or 5 bay setup. Later I want
> to add 1TB at a time, until I fill the bays. It sounds like if I started
> with 2 drives and then created a new stripe with only one drive, then
> that new stripe won't have parity... what happens if that drive fails?

You just have to add a stripe at a time rather than a single disk at a time.

Adam

-- 
Adam Leventhal, Solaris Kernel Development       http://blogs.sun.com/ahl
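To make the "one pool, many vdevs, many filesystems" point concrete, here is a minimal sketch; the pool, disk, and filesystem names are made up for illustration:

    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0    # first raidz vdev
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0       # second raidz vdev, same pool
    zfs create tank/movies
    zfs create tank/music
    zfs create tank/photos

By default all of these mount under /tank, so movies/music/photos still appear as one directory tree even though the pool spans two raidz stripes.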
> RAIDz does not let you do this: start from one disk, add another disk
> to mirror the data, add another disk to make it a RAIDz array, and add
> another disk to increase the size of the RAIDz array.

That's true: today you can't expand a RAID-Z stripe or 'promote' a mirror to be a RAID-Z stripe. Given the current architecture, I'm not sure how that would be done exactly, but it's an interesting thought experiment.

How do other systems work? Do they take the pool offline while they migrate data to the new device in the RAID stripe, or do they do this online? How would you propose this work with ZFS?

Adam

-- 
Adam Leventhal, Solaris Kernel Development       http://blogs.sun.com/ahl
They perform it while online. The operation takes an extensive amount of time, presumably due to the overhead involved in performing such an exhaustive amount of data manipulation.

There are optimizations one could take, but for simplicity, I expect this would be one way a hardware controller could expand a RAID5 array:

- Keep track of an "access method" address using a "utility area" on the existing array (used to record the address in the array beyond which the "new" stripe size is in use; this needs to be kept updated on disk in case of a power outage during array expansion).

- Logically "relocate" the first stripe of data on the existing array to an area inside the "utility area" created previously for this purpose.

- Modify the controller logic to add a temporary "stripe access method" check to the access algorithm (used from this point forward until expansion is complete).

- Read data from the full stripe on disk starting at address 00 (stripe "A").

- Read additional data from additional stripes on disk until the aggregation of stripe reads is greater than or equal to the new stripe size.

- Write the aggregated data in the new stripe layout to the previously empty stripe, plus blocks from the newly added stripe members.

- Update the "stripe access method" address.

- Read the next stripe.

- Aggregate the data left over from the previously read stripe with the next stripe.

- Write the new stripe in a similar fashion as above.

- Update the "stripe access method" address.

- Wash, rinse, repeat.

- Write the relocated stripe 00 back to the beginning of the array.

- Remove the additional logic that checks the "access method" for the array.

How one would perform such an operation in ZFS is left as an exercise for the reader :)

-=dave

----- Original Message ----- 
From: "Adam Leventhal" <ahl at eng.sun.com>
To: "MC" <rac at eastlink.ca>
Cc: <zfs-code at opensolaris.org>
Sent: Monday, July 30, 2007 4:06 PM
Subject: Re: [zfs-code] Raid-Z expansion

> How do other systems work? Do they take the pool offline while they
> migrate data to the new device in the RAID stripe, or do they do this
> online? How would you propose this work with ZFS?
> You just have to add a stripe at a time rather than a
> single disk at a time.
>
> Adam

What does it mean to "add a stripe"? Does that mean I can add one disk or do I have to add two disks?

Thanks,

-Ryan
Adding a new "stripe" refers to adding a new top level raidz vdev to the pool. Instead of adding a single disk to an existing raidz grouping (which isn''t going to buy you much in the first place), you add a new raidz group. Here''s an example using simple file vdevs: zion:~ root# zpool create raider raidz /var/root/vdev1 /var/root/ vdev2 /var/root/vdev3 zion:~ root# zpool status pool: raider state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM raider ONLINE 0 0 0 raidz1 ONLINE 0 0 0 /var/root/vdev1 ONLINE 0 0 0 /var/root/vdev2 ONLINE 0 0 0 /var/root/vdev3 ONLINE 0 0 0 errors: No known data errors Now add a new raidz stripe to the raider pool: zion:~ root# zpool add raider raidz /var/root/vdev4 /var/root/vdev5 / var/root/vdev6 zion:~ root# zpool status raider pool: raider state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM raider ONLINE 0 0 0 raidz1 ONLINE 0 0 0 /var/root/vdev1 ONLINE 0 0 0 /var/root/vdev2 ONLINE 0 0 0 /var/root/vdev3 ONLINE 0 0 0 raidz1 ONLINE 0 0 0 /var/root/vdev4 ONLINE 0 0 0 /var/root/vdev5 ONLINE 0 0 0 /var/root/vdev6 ONLINE 0 0 0 errors: No known data errors For more info and examples, you can also check out: http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf Noel On Jul 31, 2007, at 7:54 AM, Ryan Rhodes wrote:>> You just have to add a stripe at a time rather than a >> single disk at a time. >> >> Adam > > What does it mean to "add a stripe"? Does that mean I can add one > disk or do I have to add two disks? > > Thanks, > > -Ryan > -- > This messages posted from opensolaris.org > _______________________________________________ > zfs-code mailing list > zfs-code at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-code
> Do they take the pool offline while they migrate data to the new device
> in the RAID stripe, or do they do this online?

I think they (ReadyNAS, Drobo, Linux mdadm) can do it online, but either way, getting the job done is what matters most. I'm talking about small consumer systems here, so single-user backup devices or small office backup servers. They could handle going offline overnight to upgrade the redundant storage.

> How would you propose this work with ZFS?

Since expanding RAID5 striping with parity is pretty straightforward, I figured ZFS was just a more complex spin on the standard. I assumed the minds behind ZFS at Sun had the implementation details under control. Because if you need me to do it for you, we're all screwed :)
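For comparison, the Linux md reshape mentioned above looks roughly like this; device names are hypothetical, the array must already be a RAID5, and the reshape itself can take many hours while the array stays online:

    mdadm --add /dev/md0 /dev/sde1            # add the new disk as a spare
    mdadm --grow /dev/md0 --raid-devices=5    # reshape the RAID5 across 5 devices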
Thanks for explaining the constraints you'd like to see on any potential solution. It would be possible to create some sort of method for extending an existing RAID-Z stripe, but it would be quite complicated.

I think it's fair to say that while the ZFS team at Sun is working on some facilities that will be required for this sort of migration, their priorities lie elsewhere. The OpenSolaris community at large, however, may see this as a high enough priority that some group wants to give it a shot. I suggest that you file an RFE at least.

Adam

On Mon, Jul 30, 2007 at 05:55:11PM -0700, Dave Johnson wrote:
> They perform it while online. The operation takes an extensive amount of
> time, presumably due to the overhead involved in performing such an
> exhaustive amount of data manipulation.
> [...]
> How one would perform such an operation in ZFS is left as an exercise
> for the reader :)

-- 
Adam Leventhal, Solaris Kernel Development       http://blogs.sun.com/ahl
> > You just have to add a stripe at a time rather than
> > a single disk at a time.
> >
> > Adam
>
> What does it mean to "add a stripe"? Does that mean
> I can add one disk or do I have to add two disks?

I expect he means adding another raid-z vdev to the zpool, i.e. more than one disk. Obviously, if your main goal is to be maximally cheap, that's not ideal. And to my mind, a zpool consisting of multiple raid-z's is less than ideal too, because two drives' worth of parity would be better used spread across the whole pool than each isolated to part of it.

You could add a single drive as a vdev unto itself, but that would have no redundancy compared with the rest of the zpool, and either having no redundancy on a zpool (which, if the non-redundant device fails, would presently cause a panic, I think), or having vdevs at different levels of redundancy within a zpool, isn't at all a good idea.

Of course it could be done offline with a backup: destroy and re-create the pool larger, then restore. But that would require something big enough to write the backup to. Doing it online strikes me as quite tricky to say the least (two different stripe sizes while growing, among other things). Apparently some other volume management implementations can do it, from what people have been saying. But it would take one of the gurus to say whether there is anything about ZFS that would make it more difficult.
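As a footnote to the point about mixing redundancy levels: zpool does at least try to catch this case. Assuming a raidz pool named tank and a spare disk (names hypothetical), adding a bare disk should be refused unless you force it; the exact wording may differ between builds, but it looks something like:

    zpool add tank c3t0d0
    # invalid vdev specification
    # use '-f' to override the following errors:
    # mismatched replication level: pool uses raidz and new vdev is disk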
Hello Adam,

Tuesday, July 31, 2007, 12:06:40 AM, you wrote:

AL> How do other systems work? Do they take the pool offline while they migrate
AL> data to the new device in the RAID stripe or do they do this online? How
AL> would you propose this work with ZFS?

With VxVM you can expand the stripe size or change the RAID level, and it can all be done online. I've used it only once...

-- 
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
Hi All,

As part of my CS dissertation I was playing with ZFS and attempting to improve upon the standard RAID-5esque feature set of RAID-Z. This is one of the features I had hoped to get working (though in the end I didn't finish), but I think I have a reasonable idea of how it could be done, if anyone else is interested in lending a hand...

The project aimed to do two things:

1) Use all available space on a RAID-Z with mismatched disks (i.e. don't cap to the size of the smallest).

2) Allow the user to grow (or shrink) the array.

With 1 there is the difficulty that the full stripe width changes with respect to logical volume offset. What I ended up doing was specializing the space map initialization code to be vdev-specific, with the RAID-Z vdev configuring the space map such that the metaslab boundaries align with changes in the number of disks in a full-sized stripe. This makes it much easier when performing translation of lsize to asize, which now needs to be done once per metaslab during allocation rather than once per vdev...

I tested these changes by modifying small amounts of infrastructure (ztest/zdb) to allow the creation of arrays with varying sized disks (and ztest reported no problems). I also tested performance at a macro level (IOzone) and at a micro level with DTrace, averaging the thread time taken to execute the modified functions. In this implementation there was minimal (I think!) additional computational overhead: O(ndisks) for metaslab_distance and free_dva, and O(lg n) for raidz_asize. DTrace showed almost no change in execution times (for reasonable numbers of disks -- it starts getting interesting/visible at 64 devices in the RAID :) ). IOzone showed write speed to be almost identical and read speed to be ~7% worse -- I suspect this is down to my dodgy disk array of 1x40GB and 3x120GB disks. With these changes, total available space was 257G as opposed to standard RAID-Z's 113G :D

There was one regression, however: replacing disks. This will need the same infrastructure as adding a disk to the array (2).

My idea for fixing this was to use the GRID part of the block pointer to allow versions of the RAID disk array. Whenever a disk is added or replaced, the space map is 'munged', the layout for the new array is created and stored, and the GRID is incremented. The space map then always contains a map of free space in the current disk arrangement; an in-memory array of disk arrays is used for resolving blocks, with each block individually addressed by <vdev,offset,grid>. New blocks are always written using the most recent value of GRID. My vdev_raidz code currently takes account of GRID when resolving blocks.

What I haven't implemented is updating the space map and persisting the versions of the array. These both strike me as relatively hard. For example: I'm unsure how to go about locking and modifying the whole space map while the pool is mounted. Also, when freeing blocks there needs to be a mechanism to iterate through the raidz versions such that the previous grid/offset can be translated to blocks in the current grid's space map.

Also difficult is the way RAID-Z currently accounts for single-block holes of unallocatable space by rounding up writes. If a disk is added, we end up with single-block holes everywhere.
One solution is possibly to let the metaslab handle it, passivating the metaslab when there are no block runs of the minimum size remaining -- to me the metaslab code is quite scary, with magical bit manipulation; and the hairy interaction between alloc_dva and group_alloc isn't well commented and is non-intuitive...

The beauty of this is that with RAID-Z you don't have to rewrite all the data in one go when adding to the array, which, as has already been mentioned, is cumbersome and slow. Of course you won't get all the disk space available at once, but with copy-on-write, data will spread over all the disks gradually. The 256-change limit is restrictive, but if Sun add the 'rewrite data' facility, this can be overcome (and support for removing disks can be a simple extension).

While I didn't get a chance to finish the above (it turns out the dissertation text is more important than the code...), having graduated, started work, and moved house just this weekend, it would be nice to continue work on a project in the evenings/weekends; at the very least I still have a bunch of random disks that need a reliable filesystem ;). If anyone is interested or is familiar with ZFS internals and has any input on the issues I've mentioned above, I'd be hugely grateful -- in particular: rewriting the space map on the fly (and updating metaslabs to point at the new regions of the array), persisting the array layout to disk, and RAID-Z roundup. I'm all ears!

Cheers,
James

On 7/31/07, Adam Leventhal <ahl at eng.sun.com> wrote:
> Thanks for explaining the constraints you'd like to see on any potential
> solution. It would be possible to create some sort of method for extending
> an existing RAID-Z stripe, but it would be quite complicated.
>
> I think it's fair to say that while the ZFS team at Sun is working on some
> facilities that will be required for this sort of migration, their
> priorities lie elsewhere. The OpenSolaris community at large, however, may
> see this as a high enough priority that some group wants to give it a
> shot. I suggest that you file an RFE at least.
>
> Adam
It sounds like you might be the man to start this job off. Have you tried looking for contributors/assistance on any of the other lists, like Storage, for example? Some of the systems people might be able to give you a hand.

I don't have the skills to help you directly, but I'd like to offer my encouragement to look through the community lists for help. Your work would really be a plus to the OpenSolaris/Solaris 10 movement. (I think you might want to do some research on how to get your project into the OpenSolaris codestream -- I think you need some votes from contributors plus a sponsor?)

cheers,
Blake