Ian Brown
2006-Dec-06 08:03 UTC
[zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie
Hello,
I am trying to create a zfs file system according to the "Creating a Basic
ZFS File System" section of Sun's ZFS documentation.

The problem is that the partition I am trying to work with already has a
ufs filesystem on it; it is in fact empty and does not contain any file
which I need.

So:

zpool create tank /dev/dsk/c1d0s6
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1d0s6 contains a ufs filesystem.
/dev/dsk/c1d0s6 is normally mounted on /MyPartition according to
/etc/vfstab. Please remove this entry to use this device.

So I removed this entry from /etc/vfstab and also unmounted the
/MyPartition partition.

Then I tried:

zpool create -f tank /dev/dsk/c1d0s6
internal error: No such device
Abort (core dumped)

But "zpool list" gives:

NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                   1.94G   51.5K   1.94G     0%  ONLINE     -

Is there any reason for this "internal error: No such device"?
Is there something wrong here which I should do in a different way?

From "man zpool", on create and -f:

     The command verifies that each device specified is
     accessible and not currently in use by another subsys-
     tem. There are some uses, such as being currently
     mounted, or specified as the dedicated dump device, that
     prevents a device from ever being used by ZFS. Other
     uses, such as having a preexisting UFS file system, can
     be overridden with the -f option.
     ...
     -f
         Forces use of vdevs, even if they appear in use or
         specify a conflicting replication level. Not all
         devices can be overridden in this manner.

Ian

This message posted from opensolaris.org
Wee Yeh Tan
2006-Dec-06 14:00 UTC
[zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie
Ian,

The first error is correct in that zpool create will not, unless forced,
create a file system if it knows that another filesystem resides on the
target vdev.

The second error was caused by your removal of the slice.

What I find disconcerting is that the zpool was created anyway.
Can you provide the result of "zpool status" and a listing of the disk
partition table? If it is indeed carved from c1d0s6, can you destroy the
pool and see if the same creation sequence indeed creates the zpool?

--
Just me,
Wire ...

On 12/6/06, Ian Brown <ianbrn at gmail.com> wrote:
> zpool create -f tank /dev/dsk/c1d0s6
> internal error: No such device
> Abort (core dumped)
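For reference, the information being asked for here can be gathered with
something like the following. This is a minimal sketch: c1d0 is the disk
from the original report, and s2 is used on the assumption that it is the
conventional whole-disk ("backup") slice.

    zpool status tank          # pool layout and health
    prtvtoc /dev/dsk/c1d0s2    # VTOC partition table for the whole disk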
We have two aging Netapp filers and can't afford to buy new Netapp gear, so
we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come up in the
discussion:

- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
  nicely). Mirroring is an alternative, but when you're on a tight budget
  losing N/2 disk capacity is painful.

- The default scheme of one filesystem per user runs into problems with
  linux NFS clients; on one linux system, with 1300 logins, we already have
  to do symlinks with amd because linux systems can't mount more than about
  255 filesystems at once. We can of course just have one filesystem
  exported, and make /home/student a subdirectory of that, but then we run
  into problems with quotas -- and on an undergraduate fileserver, quotas
  aren't optional!

Neither of these problems is necessarily a showstopper, but both make the
transition more difficult. Any progress that could be made with them would
help sites like us make the switch sooner.
On Wed, 6 Dec 2006, Jim Davis wrote:
> Neither of these problems is necessarily a showstopper, but both make the
> transition more difficult. Any progress that could be made with them
> would help sites like us make the switch sooner.

The showstopper might be performance - since the Netapp has nonvolatile
memory - which greatly accelerates NFS operations.

A good strategy is to build a ZFS test system and determine if it provides
the NFS performance you expect in your environment. Remember that ZFS
"likes" inexpensive SATA disk drives - so a test system will be kind to
your budget and the hardware is re-usable when you decide to deploy ZFS.
And you may very well find other, unintended uses for that "test" system.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133  Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
> On Wed, 6 Dec 2006, Jim Davis wrote:
>
> We have two aging Netapp filers and can't afford to buy new Netapp gear,
> so we've been looking with a lot of interest at building NFS fileservers
> running ZFS as a possible future approach. Two issues have come up in the
> discussion:
>
> - Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
>   nicely). Mirroring is an alternative, but when you're on a tight budget
>   losing N/2 disk capacity is painful.

You can add more disks to a pool that contains raid-z vdevs; you just can't
add disks to an existing raid-z vdev. The following config was done in two
steps:

$ zpool status
  pool: cube
 state: ONLINE
 scrub: scrub completed with 0 errors on Mon Dec  4 03:52:18 2006
config:

        NAME         STATE     READ WRITE CKSUM
        cube         ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c5t0d0   ONLINE       0     0     0
            c5t1d0   ONLINE       0     0     0
            c5t2d0   ONLINE       0     0     0
            c5t3d0   ONLINE       0     0     0
            c5t4d0   ONLINE       0     0     0
            c5t5d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c5t8d0   ONLINE       0     0     0
            c5t9d0   ONLINE       0     0     0
            c5t10d0  ONLINE       0     0     0
            c5t11d0  ONLINE       0     0     0
            c5t12d0  ONLINE       0     0     0
            c5t13d0  ONLINE       0     0     0

Targets t0 through t5 were added initially; many days later targets t8
through t13 were added. The fact that these are all on the same controller
isn't relevant. This is actually what you want with raid-z anyway; in my
case above it wouldn't be good for performance to have 12 disks in the top
level raid-z.

> - The default scheme of one filesystem per user runs into problems with
>   linux NFS clients; on one linux system, with 1300 logins, we already have
>   to do symlinks with amd because linux systems can't mount more than about
>   255 filesystems at once. We can of course just have one filesystem
>   exported, and make /home/student a subdirectory of that, but then we run
>   into problems with quotas -- and on an undergraduate fileserver, quotas
>   aren't optional!

So how can OpenSolaris help you with a Linux kernel restriction on the
number of mounts? Hey I know, get rid of the Linux boxes and replace them
with OpenSolaris based ones ;-)

Seriously, what are you expecting OpenSolaris and ZFS/NFS in particular to
be able to do about a restriction in Linux?

--
Darren J Moffat
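A minimal sketch of the two-step sequence that produces a layout like the
one above; the device names are taken from the zpool status output, so
treat them as illustrative:

    # step 1: create the pool with a single 6-disk raid-z vdev
    zpool create cube raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0

    # step 2: later, grow the pool by adding a second 6-disk raid-z vdev;
    # ZFS dynamically stripes new writes across both vdevs
    zpool add cube raidz c5t8d0 c5t9d0 c5t10d0 c5t11d0 c5t12d0 c5t13d0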
Torrey McMahon
2006-Dec-06 15:34 UTC
[zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie
Still ... I don't think a core file is appropriate. Sounds like a bug is in
order if one doesn't already exist. ("zpool dumps core when missing devices
are used" perhaps?)

Wee Yeh Tan wrote:
> The second error was caused by your removal of the slice.
>
> What I find disconcerting is that the zpool was created anyway.
> - The default scheme of one filesystem per user runs into problems with
>   linux NFS clients; on one linux system, with 1300 logins, we already have
>   to do symlinks with amd because linux systems can't mount more than about
>   255 filesystems at once. We can of course just have one filesystem
>   exported, and make /home/student a subdirectory of that, but then we run
>   into problems with quotas -- and on an undergraduate fileserver, quotas
>   aren't optional!

Heh, you have the Linux source, so fix that :-)  Or just run Solaris on the
NFS clients :-)...

You can grow a RAID-Z pool, but only by adding another set of disks, not
one disk at a time.

Casper
> You can add more disks to a pool that contains raid-z vdevs; you just
> can't add disks to an existing raid-z vdev.

cd /usr/tmp
mkfile -n 100m 1 2 3 4 5 6 7 8 9 10

# create a pool with one 3-file raid-z vdev
zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
zpool status t
zfs list t

# grow the pool by adding a raid-z2 vdev; -f overrides the
# mismatched-replication-level warning
zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6 /usr/tmp/7
zpool status t
zfs list t

# add a single device as a new top-level vdev, plus a hot spare
zpool add t /usr/tmp/8 spare /usr/tmp/9
zpool status t
zfs list t

# turn the single device into a mirror by attaching another file
zpool attach t /usr/tmp/8 /usr/tmp/10
zpool status t
zfs list t

# simulate losing one backing file, scrub, then recreate and replace it
sleep 10
rm /usr/tmp/5
zpool scrub t
sleep 3
zpool status t
mkfile -n 100m 5
zpool replace t /usr/tmp/5
zpool status t
sleep 10
zpool status t

# grow the raid-z vdev by replacing each original file with a larger one
zpool offline t /usr/tmp/1
mkfile -n 200m 1
zpool replace t /usr/tmp/1
zpool status t
sleep 10
zpool status t

zpool offline t /usr/tmp/2
mkfile -n 200m 2
zpool replace t /usr/tmp/2
zfs list t
sleep 10

zpool offline t /usr/tmp/3
mkfile -n 200m 3
zpool replace t /usr/tmp/3
sleep 10
zfs list t

zpool destroy t
rm 1 2 3 4 5 6 7 8 9 10
Jim Davis wrote:
> We have two aging Netapp filers and can't afford to buy new Netapp gear,
> so we've been looking with a lot of interest at building NFS fileservers
> running ZFS as a possible future approach. Two issues have come up in
> the discussion:
>
> - Adding new disks to a RAID-Z pool (Netapps handle adding new disks
>   very nicely). Mirroring is an alternative, but when you're on a tight
>   budget losing N/2 disk capacity is painful.

What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs? Your capacity and performance will go up with each RAID-Z
vdev you add. Such as:

# zpool create swim raidz /var/tmp/dev1 /var/tmp/dev2 /var/tmp/dev3
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        swim               ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev1  ONLINE       0     0     0
            /var/tmp/dev2  ONLINE       0     0     0
            /var/tmp/dev3  ONLINE       0     0     0

errors: No known data errors
# zpool add swim raidz /var/tmp/dev4 /var/tmp/dev5 /var/tmp/dev6
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        swim               ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev1  ONLINE       0     0     0
            /var/tmp/dev2  ONLINE       0     0     0
            /var/tmp/dev3  ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev4  ONLINE       0     0     0
            /var/tmp/dev5  ONLINE       0     0     0
            /var/tmp/dev6  ONLINE       0     0     0

errors: No known data errors
# zpool add swim raidz /var/tmp/dev7 /var/tmp/dev8 /var/tmp/dev9
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        swim               ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev1  ONLINE       0     0     0
            /var/tmp/dev2  ONLINE       0     0     0
            /var/tmp/dev3  ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev4  ONLINE       0     0     0
            /var/tmp/dev5  ONLINE       0     0     0
            /var/tmp/dev6  ONLINE       0     0     0
          raidz1           ONLINE       0     0     0
            /var/tmp/dev7  ONLINE       0     0     0
            /var/tmp/dev8  ONLINE       0     0     0
            /var/tmp/dev9  ONLINE       0     0     0

errors: No known data errors
#

> - The default scheme of one filesystem per user runs into problems with
>   linux NFS clients; on one linux system, with 1300 logins, we already
>   have to do symlinks with amd because linux systems can't mount more than
>   about 255 filesystems at once. We can of course just have one
>   filesystem exported, and make /home/student a subdirectory of that, but
>   then we run into problems with quotas -- and on an undergraduate
>   fileserver, quotas aren't optional!

Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b
Look for section "B3. Why can't I mount more than 255 NFS file systems on
my client? Why is it sometimes even less than 255?".

Let us know if that works or doesn't work.

Also, ask for reasoning/schedule on when they are going to fix this on the
linux NFS alias (i believe it's nfs at lists.sourceforge.net). Trond should
be able to help you.

If going to OpenSolaris clients is not an option, then i would be curious
to know why.

eric

> Neither of these problems is necessarily a showstopper, but both make
> the transition more difficult. Any progress that could be made with
> them would help sites like us make the switch sooner.
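On the quota side of this, a minimal sketch of the one-filesystem-per-user
scheme under discussion; the pool and user names are invented for
illustration:

    zfs create tank/home
    zfs create tank/home/student1            # one filesystem per user...
    zfs set quota=2g tank/home/student1      # ...each with its own quota
    zfs set sharenfs=on tank/home/student1

Each such filesystem shows up as a separate NFS export, which is exactly
what runs into the ~255-mount limit on the Linux clients described above.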
eric kustarz wrote:
> What about adding a whole new RAID-Z vdev and dynamically striping across
> the RAID-Zs? Your capacity and performance will go up with each RAID-Z
> vdev you add.

Thanks, that's an interesting suggestion.

> Have you tried using the automounter as suggested by the linux faq?:
> http://nfs.sourceforge.net/#section_b

Yes. On our undergrad timesharing system (~1300 logins) we actually hit
that limit with a standard automounting scheme. So now we make static
mounts of the Netapp /home space and then use amd to make symlinks to the
home directories. Ugly, but it works.

> Also, ask for reasoning/schedule on when they are going to fix this on
> the linux NFS alias (i believe it's nfs at lists.sourceforge.net). Trond
> should be able to help you.

It's item 9 (last) on their "medium priority" list, according to
http://www.linux-nfs.org/priorities.html. That doesn't sound like a fix is
coming soon.

> If going to OpenSolaris clients is not an option, then i would be
> curious to know why.

Ah, well... it was a Solaris system for many years. And we were mostly a
Solaris shop for many years. Then Sun hardware got too pricey, and fast
Intel systems got cheap, but at the time Solaris support for them lagged
and Linux matured and... and now Linux is entrenched. It's a story other
departments here could tell. And at other universities too, I'll bet. So
the reality is we have to make whatever we run on our servers play well
with Linux clients.
On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:
> - The default scheme of one filesystem per user runs into problems with
>   linux NFS clients; on one linux system, with 1300 logins, we already have
>   to do symlinks with amd because linux systems can't mount more than about
>   255 filesystems at once. We can of course just have one filesystem
>   exported, and make /home/student a subdirectory of that, but then we run
>   into problems with quotas -- and on an undergraduate fileserver, quotas
>   aren't optional!

Well, if the mount limitation is imposed by the linux kernel you might
consider running linux in a zone on solaris (via BrandZ). Since BrandZ
allows you to execute linux programs on a solaris kernel, you shouldn't
have a problem with limits imposed by the linux kernel. BrandZ currently
ships in solaris express (or solaris express community release) build
snv_49 or later.

You can find more info on brandz here:
http://opensolaris.org/os/community/brandz/

ed
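Very roughly, setting up such a zone looks something like the sketch below.
Treat the zone name, zonepath, and image path as assumptions, and check the
BrandZ docs for the exact install arguments for your build:

    # configure an lx-branded zone (zone name and path are placeholders)
    zonecfg -z lxzone "create -t SUNWlx; set zonepath=/zones/lxzone; commit"
    # install from a Linux filesystem image (path is a placeholder)
    zoneadm -z lxzone install -d /path/to/linux_fs_image.tar
    zoneadm -z lxzone boot
    zlogin lxzone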
Edward Pilatowicz wrote:
> Well, if the mount limitation is imposed by the linux kernel you might
> consider running linux in a zone on solaris (via BrandZ). Since BrandZ
> allows you to execute linux programs on a solaris kernel, you shouldn't
> have a problem with limits imposed by the linux kernel. BrandZ currently
> ships in solaris express (or solaris express community release) build
> snv_49 or later.

Another alternative is to pick an OpenSolaris based distribution that
"looks and feels" more like Linux. Nexenta might do that for you.

--
Darren J Moffat
Jim Davis wrote:
> eric kustarz wrote:
>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
>
> Yes. On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard automounting scheme. So now we make static
> mounts of the Netapp /home space and then use amd to make symlinks to
> the home directories. Ugly, but it works.

Ugh indeed.

>> Also, ask for reasoning/schedule on when they are going to fix this on
>> the linux NFS alias (i believe it's nfs at lists.sourceforge.net). Trond
>> should be able to help you.
>
> It's item 9 (last) on their "medium priority" list, according to
> http://www.linux-nfs.org/priorities.html. That doesn't sound like a fix
> is coming soon.

Hmm, looks like that list is a little out of date, i'll ask trond to
update it.

>> If going to OpenSolaris clients is not an option, then i would be
>> curious to know why.
>
> Ah, well... it was a Solaris system for many years. And we were mostly a
> Solaris shop for many years. Then Sun hardware got too pricey, and fast
> Intel systems got cheap, but at the time Solaris support for them lagged
> and Linux matured and... and now Linux is entrenched. It's a story other
> departments here could tell. And at other universities too, I'll bet. So
> the reality is we have to make whatever we run on our servers play well
> with Linux clients.

Ok, can i ask a favor then? Could you try one OpenSolaris client (should
work fine on the existing hardware you have) and let us know if that works
better/worse for you?

And as Ed just mentioned, i would be really interested if BrandZ fits your
needs (then you could have one+ zone with a linux userland and an
opensolaris kernel).

eric
Ian Brown
2006-Dec-07 06:50 UTC
[zfs-discuss] Re: Creating zfs filesystem on a partition with ufs - Newbie
Hello,
Thanks. Here is the needed info:

zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c1d0s6    ONLINE       0     0     0

errors: No known data errors

"df -h" returns:

Filesystem                      Size  Used Avail Use% Mounted on
/dev/dsk/c1d0s0                  70G   59G   11G  85% /
swap                            2.3G  788K  2.3G   1% /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1   70G   59G   11G  85% /lib/libc.so.1
swap                            2.3G   20K  2.3G   1% /tmp
swap                            2.3G   32K  2.3G   1% /var/run
/dev/dsk/c1d0s7                 251M  1.1M  225M   1% /export/home

prtvtoc /dev/dsk/c1d0s0 returns:

* /dev/dsk/c1d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*      63 sectors/track
*     255 tracks/cylinder
*   16065 sectors/cylinder
*    9728 cylinders
*    9726 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector       Last
* Partition  Tag  Flags    Sector     Count      Sector   Mount Directory
       0      2    00    8787555  147460635   156248189   /
       1      3    01      48195    4096575     4144769
       2      5    00          0  156248190   156248189
       6      0    00    4690980    4096575     8787554
       7      8    00    4144770     546210     4690979   /export/home
       8      1    01          0      16065       16064
       9      9    01      16065      32130       48194

I cannot destroy this pool; "zpool destroy tank" returns:

internal error: No such device
Abort (core dumped)

Regards,
Ian

This message posted from opensolaris.org
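One hedged way to dig further, assuming the pool state is simply confused
about the underlying slice: inspect the ZFS labels on the device and try a
forced destroy. These are standard commands, though whether they clear this
particular core dump is an open question:

    zdb -l /dev/dsk/c1d0s6    # dump any ZFS labels present on the slice
    zpool destroy -f tank     # force-destroy the pool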
Jim Davis wrote:
> eric kustarz wrote:
>> What about adding a whole new RAID-Z vdev and dynamically striping across
>> the RAID-Zs? Your capacity and performance will go up with each RAID-Z
>> vdev you add.
>
> Thanks, that's an interesting suggestion.

This has the benefit of allowing you to grow into your storage. Also, a
raid-z set built from 3 disks has better reliability than one built from 4.
The per-set performance will be about the same, so if you have 12 disks,
four 3-disk sets will perform better and be more reliable than three 4-disk
sets. The available space will be smaller; there is no free lunch.
 -- richard

>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
>
> Yes. On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard automounting scheme. So now we make static
> mounts of the Netapp /home space and then use amd to make symlinks to
> the home directories. Ugly, but it works.

<geezer mode>
Solaris folks shouldn't laugh too hard; SunOS 4 had an artificial limit on
the number of client mount points too -- a bug which only read 8 kBytes
from the mnttab; if mnttab overflowed, you hung. Fixed many, many years
ago, and now mnttab is not actually a file at all ;-)
</geezer mode>
 -- richard
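To make the space trade-off concrete, a back-of-the-envelope comparison
assuming 12 identical disks of capacity S and single-parity raid-z:

    four 3-disk raidz1 sets:   4 x (3 - 1) x S = 8S usable  (4 disks of parity)
    three 4-disk raidz1 sets:  3 x (4 - 1) x S = 9S usable  (3 disks of parity)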
Hello Jim,

Wednesday, December 6, 2006, 3:28:53 PM, you wrote:

JD> We have two aging Netapp filers and can't afford to buy new Netapp gear,
JD> so we've been looking with a lot of interest at building NFS fileservers
JD> running ZFS as a possible future approach. Two issues have come up in the
JD> discussion

JD> - Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
JD> nicely). Mirroring is an alternative, but when you're on a tight budget
JD> losing N/2 disk capacity is painful.

Actually you can add another raid-z group to the pool. I believe it's the
same as what NetApp is doing (instead of actually growing the raid group).

JD> - The default scheme of one filesystem per user runs into problems with
JD> linux NFS clients; on one linux system, with 1300 logins, we already have
JD> to do symlinks with amd because linux systems can't mount more than about
JD> 255 filesystems at once. We can of course just have one filesystem
JD> exported, and make /home/student a subdirectory of that, but then we run
JD> into problems with quotas -- and on an undergraduate fileserver, quotas
JD> aren't optional!

It can with 2.6 kernels. However, there are other problems; we ended up
with a limit at around 700.

--
Best regards,
 Robert                          mailto:rmilkowski at task.gda.pl
                                 http://milek.blogspot.com
NetApp can actually grow their RAID groups, but they recommend adding an
entire RAID group at once instead. If you add a disk to a RAID group on
NetApp, I believe you need to manually start a reallocate process to
balance data across the disks.

This message posted from opensolaris.org
Jim Davis wrote:
>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
>
> Yes. On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard automounting scheme. So now we make static
> mounts of the Netapp /home space and then use amd to make symlinks to
> the home directories. Ugly, but it works.

This is how we've always done it, but we use amd (am-utils) to manage two
maps, a filesystem map and a homes map. The homes map is all type:=link,
so amd handles the link creation for us, plus we only have a handful of
mounts on any system.

It looks like if each user has a ZFS quota-ed home directory which acts as
its own little filesystem, we won't be able to do this anymore, as we'll
have to export and mount each user directory separately. Is this the case,
or is there a way to export and mount a volume containing zfs quota-ed
directories, i.e., have the quota-ed subdirs not necessarily act like
they're separate filesystems?

Jim
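For readers unfamiliar with the scheme, a rough sketch of the two am-utils
maps being described. The map names, server name, and paths are invented
for illustration:

    # amd.fs -- mount the fileserver's home volume once per client
    home    type:=nfs;rhost:=fileserver;rfs:=/export/home

    # amd.home -- each user entry is just a symlink into that one mount
    alice   type:=link;fs:=/fs/home/alice
    bob     type:=link;fs:=/fs/home/bob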
On 12/12/06, James F. Hranicky <jfh at cise.ufl.edu> wrote:
> It looks like if each user has a ZFS quota-ed home directory which acts as
> its own little filesystem, we won't be able to do this anymore, as we'll
> have to export and mount each user directory separately. Is this the case,
> or is there a way to export and mount a volume containing zfs quota-ed
> directories, i.e., have the quota-ed subdirs not necessarily act like
> they're separate filesystems?

This is definitely a feature I'd love to see, whereby one can share the
filesystem at a higher point in the tree (aka /pool/a/b, sharing /pool/a,
but have "b" as its own filesystem). I know this breaks some of the
sharing, but I'd love to have clients be able to mount /pool/a and by way
of that see b as well, and not have that treated as a separate share.
> NetApp can actually grow their RAID groups, but they recommend adding
> an entire RAID group at once instead. If you add a disk to a RAID
> group on NetApp, I believe you need to manually start a reallocate
> process to balance data across the disks.

There's no reallocation process that I'm aware of. Obviously adding a
single column to a pretty full volume prevents you from doing the most
optimal (full-stripe) writes. But since the existing parity disk covers
the new column, you do have full availability of the new space.

That's a different story with raidz. Hopefully you don't wait until the
raid group is full before adding disks, and the blocks sort themselves
out over time.

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >