Darren J Moffat
2008-Jul-10 10:42 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
I regularly create new zfs filesystems or snapshots and I find it annoying that I have to type the full dataset name in all of those cases.

I propose we allow zfs(1) to infer the part of the dataset name up to the current working directory. For example:

Today:

$ zfs create cube/builds/darrenm/bugs/6724478

With this proposal:

$ pwd
/cube/builds/darrenm/bugs
$ zfs create 6724478

Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478.

This will need some careful thought about how to deal with cases like this:

$ pwd
/cube/builds/
$ zfs create 6724478/test

What should that do? Should it create cube/builds/6724478 and cube/builds/6724478/test? Or should it fail? -p already provides some capabilities in this area.

Maybe the easiest way out of the ambiguity is to add a flag to zfs create for the partial dataset name, eg:

$ pwd
/cube/builds/darrenm/bugs
$ zfs create -c 6724478

Why "-c"? -c for "current directory". "-p" (partial) is already taken to mean "create all non existing parents" and "-r" (relative) is already used consistently as "recurse" in other zfs(1) commands (as well as lots of other places).

Alternately:

$ pwd
/cube/builds/darrenm/bugs
$ zfs mkdir 6724478

Which would act like mkdir does (including allowing a -p and -m flag with the same meaning as mkdir(1)) but creates datasets instead of directories.

Thoughts? Is this useful for anyone else? My above examples are some of the shorter dataset names I use; ones in my home directory can be even deeper.

--
Darren J Moffat
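[Editor's note: in the meantime, the mapping the proposal asks zfs(1) to do can be approximated with a small shell wrapper. This is only a sketch under the assumption that the current directory is exactly a dataset mountpoint; the function names resolve_dataset and zfs_create_here are made up, and only the `zfs list -H -o name,mountpoint` and `zfs create` invocations are real zfs(1) usage:]

```shell
# Print the dataset whose mountpoint is $2, given a
# "name<TAB>mountpoint" table in $1 as printed by:
#   zfs list -H -o name,mountpoint
resolve_dataset() {
    printf '%s\n' "$1" | awk -F'\t' -v d="$2" '$2 == d { print $1; exit }'
}

# Create a child dataset of whatever dataset is mounted at the
# current working directory (hypothetical helper, not a zfs flag).
zfs_create_here() {
    parent=$(resolve_dataset "$(zfs list -H -o name,mountpoint)" "$PWD")
    if [ -z "$parent" ]; then
        echo "no dataset mounted at $PWD" >&2
        return 1
    fi
    zfs create "$parent/$1"
}
```

Because the lookup goes through the mountpoint table rather than string-munging $PWD, it also works when mountpoints don't mirror dataset names.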
Mark Phalan
2008-Jul-10 11:01 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On Thu, 2008-07-10 at 11:42 +0100, Darren J Moffat wrote:
> I regularly create new zfs filesystems or snapshots and I find it
> annoying that I have to type the full dataset name in all of those cases.
>
> I propose we allow zfs(1) to infer the part of the dataset name up to the
> current working directory. For example:
>
> Today:
>
> $ zfs create cube/builds/darrenm/bugs/6724478
>
> With this proposal:
>
> $ pwd
> /cube/builds/darrenm/bugs
> $ zfs create 6724478
>
> Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478

I find this annoying as well. Another way that would help (but is fairly orthogonal to your suggestion) would be to write a completion module for zsh/bash/whatever that could <tab>-complete options to the z* commands including zfs filesystems.

-M
On Thu, 2008-07-10 at 13:01 +0200, Mark Phalan wrote:
> > Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478
>
> I find this annoying as well. Another way that would help (but is fairly
> orthogonal to your suggestion) would be to write a completion module for
> zsh/bash/whatever that could <tab>-complete options to the z* commands
> including zfs filesystems.

Mark Musante (famous for recently beating the crap out of lu) wrote one of these -
http://www.sun.com/bigadmin/jsp/descFile.jsp?url=descAll/bash_tabcompletion_

I don't use bash, but would love a ksh93 version of this!

cheers,
tim
Mark J Musante
2008-Jul-10 11:12 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On Thu, 10 Jul 2008, Mark Phalan wrote:
> I find this annoying as well. Another way that would help (but is fairly
> orthogonal to your suggestion) would be to write a completion module for
> zsh/bash/whatever that could <tab>-complete options to the z* commands
> including zfs filesystems.

You mean something like this?

http://www.sun.com/bigadmin/jsp/descFile.jsp?url=descAll/bash_tabcompletion_

Regards,
markm
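[Editor's note: the core of such a completion module is small. A minimal bash sketch, assuming only that `zfs list -H -o name` prints one dataset per line (the BigAdmin script linked above is far more complete, handling subcommands and properties too); the function name _zfs_datasets is made up:]

```shell
# Minimal bash completion for zfs dataset names.
_zfs_datasets() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # Offer every dataset name as a candidate; compgen keeps only
    # those matching what has been typed so far.
    COMPREPLY=( $(compgen -W "$(zfs list -H -o name 2>/dev/null)" -- "$cur") )
}
complete -F _zfs_datasets zfs
```

After sourcing this, `zfs <tab>` proposes dataset names; a real module would dispatch on the subcommand first.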
Mark Phalan
2008-Jul-10 11:22 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On Thu, 2008-07-10 at 07:12 -0400, Mark J Musante wrote:
> On Thu, 10 Jul 2008, Mark Phalan wrote:
>
> > I find this annoying as well. Another way that would help (but is fairly
> > orthogonal to your suggestion) would be to write a completion module for
> > zsh/bash/whatever that could <tab>-complete options to the z* commands
> > including zfs filesystems.
>
> You mean something like this?
>
> http://www.sun.com/bigadmin/jsp/descFile.jsp?url=descAll/bash_tabcompletion_

Yes! Exactly! Now I just need to re-write it for zsh..

Thanks,
-M
Mark J Musante
2008-Jul-10 11:41 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On Thu, 10 Jul 2008, Tim Foster wrote:
> Mark Musante (famous for recently beating the crap out of lu)

Heh. Although at this point it's hard to tell who's the beat-er and who's the beat-ee...

Regards,
markm
Hi all,

I'm a little (ok, a lot) confused on the whole zfs send/receive commands. I've seen mention of using zfs send between two different machines, but no good howto in order to make it work.

I have one try-n-buy x4500 that we are trying to move data from onto a new x4500 that we've purchased. Right now I'm using rsync over ssh (via a 1Gb/s network) to copy the data, but it is almost painfully slow (700GB over 24 hours). Yeah, it's a load of small files for the most part.

Anyway, would zfs send/receive work better? Do you have to set up a service on the receiving machine in order to receive the zfs stream? The machine is an x4500 running Solaris 10 u5.

Thanks
Dave

David Glaser
Systems Administrator
LSA Information Technology
University of Michigan
Carson Gaspar
2008-Jul-10 14:43 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
Darren J Moffat wrote:
> Today:
>
> $ zfs create cube/builds/darrenm/bugs/6724478
>
> With this proposal:
>
> $ pwd
> /cube/builds/darrenm/bugs
> $ zfs create 6724478
>
> Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478
...
> Maybe the easiest way out of the ambiguity is to add a flag to zfs
> create for the partial dataset name, eg:
>
> $ pwd
> /cube/builds/darrenm/bugs
> $ zfs create -c 6724478
>
> Why "-c"? -c for "current directory". "-p" partial is already taken to
> mean "create all non existing parents" and "-r" relative is already used
> consistently as "recurse" in other zfs(1) commands (as well as lots of
> other places).

Why not "zfs create $PWD/6724478"? Works today, traditional UNIX behaviour, no coding required. Unless you're in some bizarroland shell (like csh?)...

--
Carson
Carson Gaspar wrote:
> Darren J Moffat wrote:
> > $ pwd
> > /cube/builds/darrenm/bugs
> > $ zfs create -c 6724478
> >
> > Why "-c"? -c for "current directory". "-p" partial is already taken to
> > mean "create all non existing parents" and "-r" relative is already used
> > consistently as "recurse" in other zfs(1) commands (as well as lots of
> > other places).
>
> Why not "zfs create $PWD/6724478"? Works today, traditional UNIX
> behaviour, no coding required. Unless you're in some bizarroland shell
> (like csh?)...

Because the zfs dataset mountpoint may not be the same as the zfs pool name. This makes things a bit complicated for the initial request. Personally, I haven't played with datasets where the mountpoint is different.

If you have a zpool tank mounted on /tank and /tank/homedirs with mountpoint=/export/home, do you create the next dataset /tank/homedirs/carson, or /export/home/carson? And does the mountpoint get inherited in the obvious (vs. the simple vs. not at all) way? I don't know.

Also, $PWD has a leading / in this example.

--Joe
Darren J Moffat
2008-Jul-10 15:32 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
Carson Gaspar wrote:
> Why not "zfs create $PWD/6724478"? Works today, traditional UNIX
> behaviour, no coding required. Unless you're in some bizarroland shell

Did you actually try that?

braveheart# echo $PWD
/tank/p2/2/1
braveheart# zfs create $PWD/44
cannot create '/tank/p2/2/1/44': leading slash in name

It doesn't work because zfs create takes a dataset name, but $PWD will give you a pathname starting with /. Dataset names don't start with /.

Also this assumes that your mountpoint hierarchy is identical to your dataset name hierarchy (other than the leading /), which isn't necessarily true, ie if any of the datasets have a non-default mountpoint property.

--
Darren J Moffat
Glaser, David wrote:
> Hi all,
>
> I'm a little (ok, a lot) confused on the whole zfs send/receive commands.
> I've seen mention of using zfs send between two different machines,
> but no good howto in order to make it work.

zfs(1) man page, Examples 12 and 13 show how to use send/receive with ssh. What isn't clear about them?

> Do you have to set up a service on the receiving machine in order to receive the zfs stream?

No.

--
Darren J Moffat
Mike Gerdts
2008-Jul-10 15:56 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <Darren.Moffat at sun.com> wrote:
> Thoughts? Is this useful for anyone else? My above examples are some
> of the shorter dataset names I use, ones in my home directory can be
> even deeper.

Quite usable and should be done. The key problem I see is how to deal with ambiguity.

# zpool create pool
# zfs create pool/home
# zfs set mountpoint=/home pool/home
# zfs create pool/home/adams    (for Dilbert's master)
...
# zfs create pool/home/gerdts   (for Me)
...
# zfs create pool/home/pool     (for Ms. Pool)
...
# cd /home
# zfs snapshot pool@now

What just got snapshotted?

My vote would be that it would try the traditional match first, then try to do it by resolving the path. That is, if it would have failed in the past, it should see if the specified path is the root (mountpoint?) of a dataset. That way things like the following should work unambiguously:

# zfs snapshot ./pool@now
# zfs snapshot `pwd`/pool@now

--
Mike Gerdts
http://mgerdts.blogspot.com/
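[Editor's note: the lookup order Mike proposes can be sketched as a tiny shell function. `dataset_exists` and `path_to_dataset` are hypothetical helpers (the first would consult `zfs list`, the second would map a mountpoint back to its dataset); only the precedence logic is the point here:]

```shell
# Try the traditional dataset-name match first; only on failure
# fall back to resolving the argument as a filesystem path, so
# existing scripts keep their old, unambiguous behaviour.
resolve_arg() {
    if dataset_exists "$1"; then
        printf '%s\n' "$1"          # old behaviour wins
    else
        # Normalise to an absolute path, then map it to a dataset.
        case $1 in
            /*) path_to_dataset "$1" ;;
            *)  path_to_dataset "$PWD/${1#./}" ;;
        esac
    fi
}
```

Under this rule `zfs snapshot pool@now` still means the dataset named "pool", while `./pool@now` forces the path interpretation.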
On Thu, 10 Jul 2008, Glaser, David wrote:
> x4500 that we've purchased. Right now I'm using rsync over ssh (via
> 1Gb/s network) to copy the data but it is almost painfully slow
> (700GB over 24 hours). Yeah, it's a load of small files for the most
> part. Anyway, would zfs send/receive work better? Do you have to set
> up a service on the receiving machine in order to receive the zfs
> stream?

You don't need to set up a service on the remote machine. You can use ssh to invoke the zfs receive and pipe the data across the ssh connection, which is similar to what rsync is doing. For example (from the zfs docs):

zfs send tank/cindy@today | ssh newsys zfs recv sandbox/restfs@today

For a fresh copy, the bottleneck is quite likely ssh itself. Ssh uses fancy encryption algorithms which take lots of CPU time and really slow things down. The "blowfish" algorithm seems to be fastest, so passing

-c blowfish

as an ssh option can significantly speed things up. For example, this is how you can tell rsync to use ssh with your own options:

--rsh='/usr/bin/ssh -c blowfish'

In order to achieve even more performance (but without encryption), you can use netcat as the underlying transport. See http://netcat.sourceforge.net/.

Lastly, if you have much more CPU available than bandwidth, then it is worthwhile to install and use the 'lzop' compression program, which compresses very quickly to a format only about 30% less compressed than what gzip achieves but fast enough for real-time data transmission. It is easy to insert lzop into the pipeline so that less data is sent across the network.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
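[Editor's note: putting Bob's two suggestions together, a send pipeline with lzop over a raw TCP transport might look like the following. This is a sketch only: the hostname, port number, and dataset names are illustrative, it needs zfs, lzop, and nc on both ends, and as noted below it carries no encryption or authentication.]

```shell
# On the receiving machine first: listen, decompress, receive.
#   nc -l 12345 | lzop -dc | zfs recv sandbox/restfs@today

# Then on the sending machine: send, compress, ship over TCP.
#   zfs send tank/cindy@today | lzop -c | nc receiver-host 12345
```

The `nc -l 12345` listener form matches the Solaris netcat used elsewhere in this thread; other netcat builds want `-l -p 12345`.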
Darren J Moffat wrote:
> Glaser, David wrote:
>
> > Hi all,
> >
> > I'm a little (ok, a lot) confused on the whole zfs send/receive commands.
> > I've seen mention of using zfs send between two different machines,
> > but no good howto in order to make it work.
>
> zfs(1) man page, Examples 12 and 13 show how to use send/receive with
> ssh. What isn't clear about them?
>
> > Do you have to set up a service on the receiving machine in order to
> > receive the zfs stream?
>
> No.

I found that the overhead of SSH really hampered my ability to transfer data between thumpers as well. When I simply ran a set of sockets and a pipe things went much faster (filled a 1G link). Essentially I used netcat instead of SSH.

-Tim
On Thu, Jul 10, 2008 at 09:02:35AM -0700, Tim Spriggs wrote:
> > zfs(1) man page, Examples 12 and 13 show how to use send/receive with
> > ssh. What isn't clear about them?
>
> I found that the overhead of SSH really hampered my ability to transfer
> data between thumpers as well. When I simply ran a set of sockets and a
> pipe things went much faster (filled a 1G link). Essentially I used
> netcat instead of SSH.

You can use blowfish [0] or arcfour [1] as they are faster than the default algorithm (3des).

Cheers,
florin

0: ssh(1) man page
1: http://www.psc.edu/networking/projects/hpn-ssh/theory.php

--
Bruce Schneier expects the Spanish Inquisition.
http://geekz.co.uk/schneierfacts/fact/163
Florin Iucha wrote:
> On Thu, Jul 10, 2008 at 09:02:35AM -0700, Tim Spriggs wrote:
> > > zfs(1) man page, Examples 12 and 13 show how to use send/receive with
> > > ssh. What isn't clear about them?
> > I found that the overhead of SSH really hampered my ability to transfer
> > data between thumpers as well. When I simply ran a set of sockets and a
> > pipe things went much faster (filled a 1G link). Essentially I used
> > netcat instead of SSH.
>
> You can use blowfish [0] or arcfour [1] as they are faster than the
> default algorithm (3des).

The default algorithm for ssh on Solaris is not 3des, it is aes128-ctr.

--
Darren J Moffat
Darren J Moffat
2008-Jul-10 16:31 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
Mike Gerdts wrote:
> On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <Darren.Moffat at sun.com> wrote:
> > Thoughts? Is this useful for anyone else? My above examples are some
> > of the shorter dataset names I use, ones in my home directory can be
> > even deeper.
>
> Quite usable and should be done.
>
> The key problem I see is how to deal with ambiguity.
>
> # zpool create pool
> # zfs create pool/home
> # zfs set mountpoint=/home pool/home
> # zfs create pool/home/adams    (for Dilbert's master)
> ...
> # zfs create pool/home/gerdts   (for Me)
> ...
> # zfs create pool/home/pool     (for Ms. Pool)
> ...
> # cd /home
> # zfs snapshot pool@now
>
> What just got snapshotted?

The dataset named "pool" only. I don't see how that could be ambiguous now or with what I proposed.

If you said zfs snapshot -r pool@now then all of them.

--
Darren J Moffat
Is that faster than blowfish?

Dave

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Darren J Moffat
Sent: Thursday, July 10, 2008 12:27 PM
To: Florin Iucha
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions

Florin Iucha wrote:
> On Thu, Jul 10, 2008 at 09:02:35AM -0700, Tim Spriggs wrote:
> > I found that the overhead of SSH really hampered my ability to transfer
> > data between thumpers as well. When I simply ran a set of sockets and a
> > pipe things went much faster (filled a 1G link). Essentially I used
> > netcat instead of SSH.
>
> You can use blowfish [0] or arcfour [1] as they are faster than the
> default algorithm (3des).

The default algorithm for ssh on Solaris is not 3des, it is aes128-ctr.

--
Darren J Moffat

_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Mike Gerdts
2008-Jul-10 16:42 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On Thu, Jul 10, 2008 at 11:31 AM, Darren J Moffat <Darren.Moffat at sun.com> wrote:
> Mike Gerdts wrote:
> >
> > The key problem I see is how to deal with ambiguity.
> >
> > # zpool create pool
> > # zfs create pool/home
> > # zfs set mountpoint=/home pool/home
> > # zfs create pool/home/adams    (for Dilbert's master)
> > ...
> > # zfs create pool/home/gerdts   (for Me)
> > ...
> > # zfs create pool/home/pool     (for Ms. Pool)
> > ...
> > # cd /home
> > # zfs snapshot pool@now
> >
> > What just got snapshotted?
>
> The dataset named "pool" only. I don't see how that could be ambiguous now
> or with what I proposed.
>
> If you said zfs snapshot -r pool@now then all of them.

Which dataset named pool? The one at /pool (the root of the zpool, if you will) or the one at /home/pool (Ms. Pool's home directory), which happens to be `pwd`/pool?

--
Mike Gerdts
http://mgerdts.blogspot.com/
I guess what I was wondering was if there was a direct method rather than the overhead of ssh.

-----Original Message-----
From: Darren.Moffat at Sun.COM [mailto:Darren.Moffat at Sun.COM] On Behalf Of Darren J Moffat
Sent: Thursday, July 10, 2008 11:40 AM
To: Glaser, David
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions

Glaser, David wrote:
> Hi all,
>
> I'm a little (ok, a lot) confused on the whole zfs send/receive commands.
> I've seen mention of using zfs send between two different machines,
> but no good howto in order to make it work.

zfs(1) man page, Examples 12 and 13 show how to use send/receive with ssh. What isn't clear about them?

> Do you have to set up a service on the receiving machine in order to receive the zfs stream?

No.

--
Darren J Moffat
Darren J Moffat
2008-Jul-10 16:48 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
Mike Gerdts wrote:
> On Thu, Jul 10, 2008 at 11:31 AM, Darren J Moffat <Darren.Moffat at sun.com> wrote:
> > Mike Gerdts wrote:
> > > The key problem I see is how to deal with ambiguity.
> > > ...
> > > # zfs create pool/home/pool     (for Ms. Pool)
> > > ...
> > > # cd /home
> > > # zfs snapshot pool@now
> > >
> > > What just got snapshotted?
> >
> > The dataset named "pool" only. I don't see how that could be ambiguous now
> > or with what I proposed.
> >
> > If you said zfs snapshot -r pool@now then all of them.
>
> Which dataset named pool? The one at /pool (the root of the zpool, if
> you will) or the one at /home/pool (Ms. Pool's home directory), which
> happens to be `pwd`/pool?

Ah, sorry, I missed that your third dataset ended in pool. The answer is still the same, though, if the proposal to use a new flag for partial paths is taken. Which is why I suggested that; it is ambiguous in the example you gave if zfs(1) commands other than create can take relative paths too [which would be useful].

--
Darren J Moffat
Glaser, David wrote:
> I guess what I was wondering was if there was a direct method rather than the overhead of ssh.

As others have suggested, use netcat (/usr/bin/nc); however, you get no over-the-wire data confidentiality or integrity and no strong authentication with that. If you need those then a combination of netcat and IPsec might help.

--
Darren J Moffat
Thankfully right now it's over a private IP network between the two machines. I'll play with it a bit and let folks know if I can't get it to work.

Thanks,
Dave

-----Original Message-----
From: Darren.Moffat at Sun.COM [mailto:Darren.Moffat at Sun.COM] On Behalf Of Darren J Moffat
Sent: Thursday, July 10, 2008 12:50 PM
To: Glaser, David
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions

Glaser, David wrote:
> I guess what I was wondering was if there was a direct method rather than the overhead of ssh.

As others have suggested, use netcat (/usr/bin/nc); however, you get no over-the-wire data confidentiality or integrity and no strong authentication with that. If you need those then a combination of netcat and IPsec might help.

--
Darren J Moffat
On Thu, Jul 10, 2008 at 12:43, Glaser, David <dsglaser at umich.edu> wrote:
> I guess what I was wondering was if there was a direct method rather than the overhead of ssh.

On the receiving machine:

nc -l 12345 | zfs recv mypool/fs@today

and on the sending machine:

zfs send sourcepool/fs@today | nc othermachine.umich.edu 12345

You'll need to build your own netcat, but this is fairly simple. If you run into trouble let me know and I'll post an x86 package.

Will
Could I trouble you for the x86 package? I don't seem to have much in the way of software on this try-n-buy system...

Thanks,
Dave

-----Original Message-----
From: Will Murnane [mailto:will.murnane at gmail.com]
Sent: Thursday, July 10, 2008 12:58 PM
To: Glaser, David
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions

On Thu, Jul 10, 2008 at 12:43, Glaser, David <dsglaser at umich.edu> wrote:
> I guess what I was wondering was if there was a direct method rather than the overhead of ssh.

On the receiving machine:

nc -l 12345 | zfs recv mypool/fs@today

and on the sending machine:

zfs send sourcepool/fs@today | nc othermachine.umich.edu 12345

You'll need to build your own netcat, but this is fairly simple. If you run into trouble let me know and I'll post an x86 package.

Will
Will Murnane wrote:
> On Thu, Jul 10, 2008 at 12:43, Glaser, David <dsglaser at umich.edu> wrote:
> > I guess what I was wondering was if there was a direct method rather than the overhead of ssh.
>
> On the receiving machine:
> nc -l 12345 | zfs recv mypool/fs@today
> and on the sending machine:
> zfs send sourcepool/fs@today | nc othermachine.umich.edu 12345
> You'll need to build your own netcat, but this is fairly simple. If
> you run into trouble let me know and I'll post an x86 package.

If you are running Nexenta you can also "apt-get install sunwnetcat".
On Thu, Jul 10, 2008 at 13:05, Glaser, David <dsglaser at umich.edu> wrote:
> Could I trouble you for the x86 package? I don't seem to have much in the way of software on this try-n-buy system...

No problem. Packages are posted at http://will.incorrige.us/solaris-packages/ . You'll need gettext and iconv as well as netcat, as it links against libiconv. Download the gzip files, decompress them with gzip -d, then

pkgtrans $packagefile $tempdir

and run

pkgadd -d $tempdir

Files will be installed in the /usr/site hierarchy. The executable is called "netcat", not "nc", because that's what it builds as by default. I believe I got all the dependencies, but if not I'll be glad to post whatever is missing as well. If you'd rather have spec files and sources (which you can assemble with pkgbuild) than binaries, I can provide those instead.

Will
Carson Gaspar
2008-Jul-10 18:40 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
Moore, Joe wrote:
> Carson Gaspar wrote:
> > Darren J Moffat wrote:
> > > $ pwd
> > > /cube/builds/darrenm/bugs
> > > $ zfs create -c 6724478
> >
> > Why not "zfs create $PWD/6724478"? Works today, traditional UNIX
> > behaviour, no coding required. Unless you're in some bizarroland shell
> > (like csh?)...
>
> Because the zfs dataset mountpoint may not be the same as the zfs pool
> name. This makes things a bit complicated for the initial request.

The leading slash will be a problem with the current code. I forgot about that... make that ${PWD#/} (or change the code to ignore the leading slash...). That is, admittedly, more typing than a single character option, but not much.

And yes, if your mount name and pool names don't match, extra code would be required to determine the parent pool/fs of the path passed. But no more code than magic CWD goo... I really don't like special case options whose sole purpose is to shorten command line length.

--
Carson
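[Editor's note: for readers unfamiliar with the expansion, `${PWD#/}` strips a single leading slash, which yields a valid dataset name only when the mountpoint hierarchy mirrors the dataset hierarchy. A quick illustration, using a plain variable rather than the real $PWD so the example is self-contained:]

```shell
# Hypothetical path standing in for $PWD:
dir=/cube/builds/darrenm/bugs

# ${var#/} removes one leading "/" if present:
echo "${dir#/}"        # cube/builds/darrenm/bugs

# But with a non-default mountpoint such as /export/home for a
# pool/home dataset, the stripped path is NOT the dataset name:
dir=/export/home/carson
echo "${dir#/}"        # export/home/carson, not pool/home/carson
```

This is exactly the gap the "extra code" mentioned above would have to close: mapping a mountpoint back to its dataset instead of trusting the path string.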
Mike Gerdts
2008-Jul-10 19:59 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On Thu, Jul 10, 2008 at 1:40 PM, Carson Gaspar <carson at taltos.org> wrote:
> Moore, Joe wrote:
> > Because the zfs dataset mountpoint may not be the same as the zfs pool
> > name. This makes things a bit complicated for the initial request.
>
> The leading slash will be a problem with the current code. I forgot
> about that... make that ${PWD#/} (or change the code to ignore the
> leading slash...). That is, admittedly, more typing than a single
> character option, but not much.

Most of the places where I use zfs, the ${PWD#/} trick would not work. I have a pool of storage that has an arbitrary name, then I use it in various places to match up with names I have traditionally used. That is, I have a pool named local that has:

local/zones on /zones
local/ws on /ws
local/home on /export/home
...

I think that if you take a look at a machine that uses zfsroot you will find very few, if any, datasets used directly at /rpool/<whatever>.

> And yes, if your mount name and pool names don't match, extra code would
> be required to determine the parent pool/fs of the path passed. But no
> more code than magic CWD goo... I really don't like special case options
> whose sole purpose is to shorten command line length.

It's not just shortening command line length. If a user has permissions to do things in his/her datasets, there should be no need for that user to know about the overall structure of the zpool. This is user-visible complexity that will turn into a long-term management problem as sysadmins split or merge pools, change pool naming schemes, reorganize dataset hierarchies, etc.

--
Mike Gerdts
http://mgerdts.blogspot.com/
Will Murnane wrote:
> On Thu, Jul 10, 2008 at 12:43, Glaser, David <dsglaser at umich.edu> wrote:
> > I guess what I was wondering was if there was a direct method rather than the overhead of ssh.
>
> On the receiving machine:
> nc -l 12345 | zfs recv mypool/fs@today
> and on the sending machine:
> zfs send sourcepool/fs@today | nc othermachine.umich.edu 12345
> You'll need to build your own netcat, but this is fairly simple.

Why?

Pathname: /usr/bin/nc
Type: regular file
Expected mode: 0555
Expected owner: root
Expected group: bin
Expected file size (bytes): 31428
Expected sum(1) of contents: 5207
Expected last modification: Jun 16 05:58:18 2008
Referenced by the following packages:
        SUNWnetcat
Current status: installed

--
Darren J Moffat
On Fri, Jul 11, 2008 at 05:23, Darren J Moffat <darrenm at opensolaris.org> wrote:
> Why?
> Referenced by the following packages:
>         SUNWnetcat

Is this in 10u5? Weird, it's not on my media.

Will
Will Murnane wrote:
> On Fri, Jul 11, 2008 at 05:23, Darren J Moffat <darrenm at opensolaris.org> wrote:
> > Why?
> > Referenced by the following packages:
> >         SUNWnetcat
>
> Is this in 10u5? Weird, it's not on my media.

No, but this is an opensolaris.org alias, not a Solaris 10 support forum. So the assumption, unless people say otherwise, is that you are running a recent build of SX:CE or OpenSolaris 2008.05 (including updates).

--
Darren J Moffat
On Fri, Jul 11, 2008 at 11:44, Darren J Moffat <darrenm at opensolaris.org> wrote:
> No, but this is an opensolaris.org alias, not a Solaris 10 support forum.
> So the assumption, unless people say otherwise, is that you are running a
> recent build of SX:CE or OpenSolaris 2008.05 (including updates).

Luckily, the OP mentioned he's running 10u5 in his first post ;)

Will
Mike Gerdts
2009-Nov-25 21:19 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
Is there still any interest in this? I've done a bit of hacking (then searched for this thread - I picked -P instead of -c)...

$ zfs get -P compression,dedup /var
NAME                PROPERTY     VALUE  SOURCE
rpool/ROOT/zfstest  compression  on     inherited from rpool/ROOT
rpool/ROOT/zfstest  dedup        off    default

$ pfexec zfs snapshot -P .@now
Creating snapshot <rpool/export/home@now>

Of course create/mkdir would make it into the eventual implementation as well. For those missing this thread in their mailboxes, the conversation is archived at
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html

Mike

On Thu, Jul 10, 2008 at 4:42 AM, Darren J Moffat <Darren.Moffat at sun.com> wrote:
> I regularly create new zfs filesystems or snapshots and I find it
> annoying that I have to type the full dataset name in all of those cases.
>
> I propose we allow zfs(1) to infer the part of the dataset name up to the
> current working directory. For example:
>
> Today:
>
> $ zfs create cube/builds/darrenm/bugs/6724478
>
> With this proposal:
>
> $ pwd
> /cube/builds/darrenm/bugs
> $ zfs create 6724478
>
> Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478
>
> This will need some careful thought about how to deal with cases like this:
>
> $ pwd
> /cube/builds/
> $ zfs create 6724478/test
>
> What should that do? Should it create cube/builds/6724478 and
> cube/builds/6724478/test? Or should it fail? -p already provides
> some capabilities in this area.
>
> Maybe the easiest way out of the ambiguity is to add a flag to zfs
> create for the partial dataset name, eg:
>
> $ pwd
> /cube/builds/darrenm/bugs
> $ zfs create -c 6724478
>
> Why "-c"? -c for "current directory". "-p" partial is already taken to
> mean "create all non existing parents" and "-r" relative is already used
> consistently as "recurse" in other zfs(1) commands (as well as lots of
> other places).
> Alternately:
>
> $ pwd
> /cube/builds/darrenm/bugs
> $ zfs mkdir 6724478
>
> Which would act like mkdir does (including allowing a -p and -m flag
> with the same meaning as mkdir(1)) but creates datasets instead of
> directories.
>
> Thoughts? Is this useful for anyone else? My above examples are some
> of the shorter dataset names I use, ones in my home directory can be
> even deeper.
>
> --
> Darren J Moffat

--
Mike Gerdts
http://mgerdts.blogspot.com/
Darren J Moffat
2009-Nov-26 09:41 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
Mike Gerdts wrote:
> Is there still any interest in this? I've done a bit of hacking (then
> searched for this thread - I picked -P instead of -c)...
>
> $ zfs get -P compression,dedup /var
> NAME                PROPERTY     VALUE  SOURCE
> rpool/ROOT/zfstest  compression  on     inherited from rpool/ROOT
> rpool/ROOT/zfstest  dedup        off    default
>
> $ pfexec zfs snapshot -P .@now
> Creating snapshot <rpool/export/home@now>
>
> Of course create/mkdir would make it into the eventual implementation
> as well. For those missing this thread in their mailboxes, the
> conversation is archived at
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html

I'm certainly still willing to sponsor the ARC case and to do code review for you. Once you are ready please send out a proposal including draft (ASCII is fine) man page changes. Once there is consensus from the core ZFS developer team I'll submit it to ARC for you.

--
Darren J Moffat
Menno Lageman
2009-Nov-26 14:11 UTC
[zfs-discuss] proposal partial/relative paths for zfs(1)
On 11/25/09 22:19, Mike Gerdts wrote:
> Is there still any interest in this? I've done a bit of hacking (then
> searched for this thread - I picked -P instead of -c)...
>
> $ pfexec zfs snapshot -P .@now
> Creating snapshot <rpool/export/home@now>
>
> Of course create/mkdir would make it into the eventual implementation
> as well. For those missing this thread in their mailboxes, the
> conversation is archived at
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html

Yes, I am also interested in this[1], but I won't have time to work on this in the foreseeable future, so please have at it.

Menno

[1] http://mail.opensolaris.org/pipermail/zfs-discuss/2009-August/030720.html

--
Menno Lageman - Sun Microsystems - http://blogs.sun.com/menno