Now I know this is counterculture, but it's biting me in the backside
right now, and ruining my life.

I have a storage array (iSCSI SAN) that is performing badly and requires
some upgrades/reconfiguration. I have a second storage array that I
wanted to set up as a ZFS mirror so I could free the bad array for
upgrades. The live array is only 15% utilized. It is 3.82TB in size.
The second array that I set up is just short of that at 3.7TB.
Obviously I can't set this up as a mirror, since it's too small. But
given the low utilization, why the heck not? The way ZFS works, there
is no reason you shouldn't be able to shrink a pool onto a smaller
mirror the same way you can grow it by attaching larger storage and
detaching the old device. It may require an export/import or the like,
but why not?

What I'm left with now is either doing more expensive modifications to
the new mirror to increase its size, or using zfs send | receive or
rsync to copy the data and taking extended downtime for our users.
Yuck!

Related to this, if I'm going to generate downtime, why don't I just
forget the SAN and move the whole thing to a NAS solution, running NFS
on Solaris on the SAN box instead? It's just commodity x86 server
hardware.

My life is ruined by too many choices, and not enough time to evaluate
everything.

Jon

--
Jonathan Loran
IT Manager
Space Sciences Laboratory, UC Berkeley
(510) 643-5146
jloran at ssl.berkeley.edu
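The grow path described above looks roughly like this (pool and device
names are made up for illustration): attach a larger device as a mirror
of the existing one, let it resilver, then detach the smaller device.

# zpool attach tank c1t0d0 c2t0d0      (c2t0d0 is the larger replacement LUN)
# zpool status tank                    (wait for the resilver to complete)
# zpool detach tank c1t0d0             (drop the old, smaller LUN)

The extra capacity shows up once the smaller device is gone (older
builds may want an export/import). Going the other direction fails,
because zpool attach refuses a device smaller than the vdev it would
mirror.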
On Mar 3, 2008, at 2:14 PM, Jonathan Loran wrote:

> Now I know this is counterculture, but it's biting me in the backside
> right now, and ruining my life.
>
> I have a storage array (iSCSI SAN) that is performing badly and
> requires some upgrades/reconfiguration. I have a second storage array
> that I wanted to set up as a ZFS mirror so I could free the bad array
> for upgrades. The live array is only 15% utilized. It is 3.82TB in
> size. The second array that I set up is just short of that at 3.7TB.
> Obviously I can't set this up as a mirror, since it's too small. But
> given the low utilization, why the heck not? The way ZFS works, there
> is no reason you shouldn't be able to shrink a pool onto a smaller
> mirror the same way you can grow it by attaching larger storage and
> detaching the old device. It may require an export/import or the
> like, but why not?

You can't shrink a pool.

> What I'm left with now is either doing more expensive modifications
> to the new mirror to increase its size, or using zfs send | receive
> or rsync to copy the data and taking extended downtime for our users.
> Yuck!

Why do you need extended downtime to use zfs send|recv? I would think
that the only outage you would need to take would be on cutover to the
new array.

The following may help:
http://blogs.sun.com/constantin/entry/useful_zfs_snapshot_replicator_script

--
Shawn Ferry
shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations
571.291.4898
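A minimal sketch of that cutover, assuming the data lives in a
filesystem tank/data and the new array backs a pool called newpool
(both names hypothetical): do the bulk copy while users keep working,
then send only the delta during a short outage.

# zfs snapshot tank/data@seed
# zfs send tank/data@seed | zfs recv newpool/data     (bulk copy, pool stays live)
  ... at cutover time, quiesce clients ...
# zfs snapshot tank/data@cutover
# zfs send -i tank/data@seed tank/data@cutover | zfs recv -F newpool/data
  ... repoint clients at newpool/data; the outage lasts only as long as the delta takes ...

The replicator script linked above automates essentially this
snapshot-and-incremental-send loop.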
Shawn Ferry wrote:
> On Mar 3, 2008, at 2:14 PM, Jonathan Loran wrote:
>
>> The way ZFS works, there is no reason you shouldn't be able to
>> shrink a pool onto a smaller mirror the same way you can grow it by
>> attaching larger storage and detaching the old device. It may
>> require an export/import or the like, but why not?
>
> You can't shrink a pool.

Not now at least. Probably not ever.

>> What I'm left with now is either doing more expensive modifications
>> to the new mirror to increase its size, or using zfs send | receive
>> or rsync to copy the data and taking extended downtime for our
>> users. Yuck!
>
> Why do you need extended downtime to use zfs send|recv? I would think
> that the only outage you would need to take would be on cutover to
> the new array.
>
> The following may help:
> http://blogs.sun.com/constantin/entry/useful_zfs_snapshot_replicator_script

That is very useful, thanks! Most of my work is done for me, and I
thought I would have to write this myself.

From a tuning perspective, however, I'm still thinking about canning
the whole iSCSI idea for this particular box and setting up a separate
standalone NFS/ZFS server. I can still use this script to get the data
across.

Jon
Jonathan,

On Mon, Mar 03, 2008 at 11:14:14AM -0800, Jonathan Loran wrote:
> What I'm left with now is either doing more expensive modifications
> to the new mirror to increase its size, or using zfs send | receive
> or rsync to copy the data and taking extended downtime for our users.
> Yuck!

Not sure if this is going to help you, as I do not know how your
restructuring impacts the available space on your old array.
Here's an idea: create a sparse zvol on the new array and attach it
to the old array.

# mkdir /test
# mkfile 150m /test/old; mkfile 100m /test/new
# zpool create foo /test/old
# zpool create bar /test/new
# zfs create -s -V 150m bar/vol
# mkfile 20m /foo/test.file
# zpool attach foo /test/old /dev/zvol/dsk/bar/vol
# zpool detach foo /test/old

Greetings,

Patrick
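One way to see that the mirror really fits, using the names from the
toy example above: the zvol advertises 150m, but it only consumes space
for the roughly 20m of data that actually resilvers onto it, which is
well within bar's 100m of backing store.

# zfs get volsize bar/vol      (logical size: 150M)
# zfs list bar/vol             (USED stays near the ~20m of real data)
# zpool list foo bar           (foo is now a healthy mirror; bar is far from full)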
Patrick Bachmann wrote:
> Not sure if this is going to help you, as I do not know how your
> restructuring impacts the available space on your old array.
> Here's an idea: create a sparse zvol on the new array and attach it
> to the old array.
>
> # mkdir /test
> # mkfile 150m /test/old; mkfile 100m /test/new
> # zpool create foo /test/old
> # zpool create bar /test/new
> # zfs create -s -V 150m bar/vol
> # mkfile 20m /foo/test.file
> # zpool attach foo /test/old /dev/zvol/dsk/bar/vol
> # zpool detach foo /test/old

I'm not sure I follow how this would work. I do have tons of space on
the old array; it's only 15% utilized, hence my original comment. How
does my data get into the /test/old zvol (zpool foo)? What would I end
up with? This seems a bit like black magic. Maybe that's what I need,
eh?

Jon
Jonathan,

On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote:
> I'm not sure I follow how this would work.

The keyword here is thin provisioning. The sparse zvol only uses as
much space as the actual data needs. So, if you use a sparse zvol, you
can mirror to a smaller "disk", provided the data you actually store
fits in the space physically available to back the sparse zvol.

> I do have tons of space on the old array; it's only 15% utilized,
> hence my original comment. How does my data get into the /test/old
> zvol (zpool foo)? What would I end up with?

There's no zvol on foo. In the example, /test/old stands in for your
old array, and bar/vol is the sparse zvol on the new array that gets
attached as its mirror. After detaching /test/old, you may reconfigure
your old array. At that point, foo lives on a zvol in the pool bar.

How to get the data back over depends on how your reconfiguration of
the old array impacts the pool and vdev size. If it gets smaller, you
cannot attach it to the pool where your data currently resides, and
you have to go the send|receive route...

Putting the zpool on a zvol permanently might not be something you
want, as this creates some overhead I can't quantify, and you mentioned
some performance issues you're already experiencing.

> This seems a bit like black magic. Maybe that's what I need, eh?

Feel the magic at http://www.cuddletech.com/blog/pivot/entry.php?id=729

Greetings,

Patrick
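If the old array comes back from its reconfiguration at least as large
as the zvol, the pool can be walked back off the zvol layer the same
way it got onto it (the device name below is made up):

# zpool attach foo /dev/zvol/dsk/bar/vol c4t0d0   (c4t0d0 = reconfigured old array)
# zpool status foo                                (wait for the resilver)
# zpool detach foo /dev/zvol/dsk/bar/vol

If it comes back smaller, it is, as noted above, back to send|receive.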
Patrick Bachmann wrote:
> How to get the data back over depends on how your reconfiguration of
> the old array impacts the pool and vdev size. If it gets smaller, you
> cannot attach it to the pool where your data currently resides, and
> you have to go the send|receive route...
>
> Putting the zpool on a zvol permanently might not be something you
> want, as this creates some overhead I can't quantify, and you
> mentioned some performance issues you're already experiencing.

Well, there's the rub. I will be reconfiguring the old array to be
identical to the new one, so it will be smaller. It's always something,
isn't it?

I have to say, though, this is very slick, and I can see this sparse
zvol trick being handy in the future. Thanks!

Jon