similar to: can't share a zfs

Displaying 20 results from an estimated 3000 matches similar to: "can't share a zfs"

2006 Oct 24
2
zfs set sharenfs=on
I started sharing out ZFS filesystems via NFS last week using sharenfs=on. That seems to work fine until I reboot. It turned out the NFS server wasn't enabled - I had to enable nfs/server, nfs/lockmgr and nfs/status manually. This is a stock SXCR b49 (ZFS root) install - I don't think I'd changed anything much. Shouldn't a ZFS share permanently enable NFS?
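A plausible one-time fix, assuming the stock SMF service names (whether sharenfs should do this persistently is the open question):
# svcadm enable -r svc:/network/nfs/server:default   # -r also enables dependencies such as nfs/lockmgr and nfs/status
# svcs nfs/server nfs/lockmgr nfs/status             # verify all three come back online across reboots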
2008 Jul 12
2
sharenfs=off, but still being shared?
I noticed an oddity on my 2008.05 box today. I created a new ZFS filesystem that I was planning to NFS-share out to an old FreeBSD box; after I set sharenfs=on for it, I noticed a bunch of others were shared too:
-bash-3.2# dfshares -F nfs
RESOURCE                SERVER   ACCESS   TRANSPORT
reaver:/store/movies    reaver   -        -
reaver:/export
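One way to untangle this, assuming the pool is named store as in the output above; zfs get -r shows whether sharenfs was inherited from a parent dataset:
# zfs get -r sharenfs store    # where does each dataset get its sharenfs value from?
# zfs unshare store/movies     # drop an active share without changing the property
# dfshares -F nfs              # confirm what is still exported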
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, trying to see what kind of benefit enabling dedup will give me. The standard practice for reprocessing data that's already stored, to add compression and now dedup, seems to be a send/receive pipe similar to: zfs send -R <old fs>@snap | zfs recv -d <new fs> However, according to the man page,
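A minimal sketch of that rewrite pipe, with hypothetical dataset names tank/data and tank/copy; since dedup is applied at write time, the received copy picks it up if it is enabled on the destination:
# zfs set dedup=on tank/copy
# zfs snapshot -r tank/data@migrate
# zfs send -R tank/data@migrate | zfs recv -d tank/copy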
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read; I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
2008 Jul 15
2
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read; I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
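One common cause here: sharenfs=on already defaults to rw, so the block is often POSIX permissions or root on the client being mapped to nobody. A sketch, with client1 as a hypothetical client hostname:
# zfs set sharenfs=rw,root=client1 tank   # trust root on client1 instead of mapping it to nobody
# chmod -R a+rwX /tank                    # or open up the directory permissions themselves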
2009 Nov 26
5
rquota did not show userquota (Solaris 10)
Hi, we have a new fileserver running on X4275 hardware with Solaris 10U8. On this fileserver we created one test dir with a quota and mounted it on another Solaris 10 system. There the quota command did not show the used quota. Does this feature only work with OpenSolaris, or is it intended to work on Solaris 10 as well? Here is what we did on the server: # zfs create -o mountpoint=/export/home2
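For reference, a sketch of the server-side checks, with a hypothetical pool name pool0; userquota values are visible on the server even when rquotad does not report them to clients:
# zfs set userquota@testuser=1g pool0/home2   # hypothetical user and limit
# zfs userspace pool0/home2                   # per-user usage and quota, as the server sees it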
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000 say) on a system that supports ZFS? We currently use vxfs and assign a user quota, and backups are done via Legato Networker. From what little I currently understand, the general
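Two broad approaches that come up for this, sketched with hypothetical names; per-user filesystems give per-user snapshots and properties, while userquota (on releases that have it) keeps everything in one filesystem:
# zfs create -o quota=500m tank/home/user0001   # one filesystem per user
# zfs set userquota@user0001=500m tank/home     # or: one filesystem, per-user quotas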
2010 Aug 13
15
NFS issue with ZFS
I have Solaris 10 U7 that is exporting a ZFS filesystem. The client is Solaris 9 U7. I can mount the filesystem just fine, but I am unable to write to it. showmount -e shows my mount is set for everyone, and the dfstab file has the rw option set. So what gives? Phillip
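One thing worth checking: ZFS-backed exports are driven by the sharenfs property rather than /etc/dfs/dfstab, so the rw in dfstab may simply not apply. The dataset name below is a guess:
# zfs get sharenfs tank/export   # if this is set, it wins over the dfstab entry
# share                          # on the server: confirm the active export really carries rw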
2008 Apr 03
3
[Bug 971] New: zfs key -l fails after unloading (keyscope=dataset)
http://defect.opensolaris.org/bz/show_bug.cgi?id=971
Summary: zfs key -l fails after unloading (keyscope=dataset)
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2007 Jan 30
3
Export ZFS over NFS?
I've got my first server deployment with ZFS, consolidating a pair of other file servers that used to have a dozen or so NFS exports in /etc/dfs/dfstab similar to:
/export/solaris/images
/export/tools
/export/ws
..... and so on....
For the new server, I have one large zfs pool:
-bash-3.00# df -hl
bigpool   16T   1.5T   15T   10%   /export
that I am starting to
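A sketch of the usual layout for this, assuming the child filesystems inherit sharenfs from bigpool; each child becomes its own NFS export that clients mount individually:
# zfs create bigpool/tools
# zfs create bigpool/ws
# zfs set sharenfs=on bigpool   # children inherit the property and are shared automatically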
2010 Jan 17
3
I can't seem to get the pool to export...
root at nas:~# zpool export -f raid
cannot export 'raid': pool is busy
I've disabled all the services I could think of. I don't see anything accessing it. I also don't see any of the filesystems mounted with mount or "zfs mount". What's the deal? This is not the rpool, so I'm not booted off it or anything like that.
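Some usual suspects to rule out, assuming /raid is the pool's top-level mountpoint:
# fuser -c /raid   # any process with an open file or working directory under the pool?
# swap -l          # swap on a zvol in the pool keeps it busy
# dumpadm          # so does a dump device on a zvol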
2005 Nov 20
11
NFS question (and Best Practices)
I saw in another post that a best practices doc will be coming, but I figured I would try to get this working. I'm trying to understand why ZFS encourages so many "zfs create" operations so I can use it better. What makes sense is that each ZFS filesystem can have its own options (compression, nfs, atime, quota, etc). I really love this because it is so tuneable -- compression on these
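A small example of that per-filesystem tuning, with hypothetical dataset names:
# zfs create -o compression=on -o quota=10g tank/home/alice
# zfs set atime=off tank/home/alice
# zfs get compression,quota,atime tank/home/alice   # each fs carries its own settings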
2006 Aug 22
1
Interesting zfs destroy failure
Saw this while writing a script today -- while debugging the script, I was ctrl-c-ing it a lot rather than waiting for the zfs create / zfs set commands to complete. After doing so, my cleanup script failed to zfs destroy the new filesystem:
root at kronos:/ # zfs destroy -f raid/www/user-testuser
cannot unshare 'raid/www/user-testuser': /raid/www/user-testuser: not shared
root
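A plausible workaround for the stale share state (an untested sketch): clear the sharing property first, so destroy no longer tries to unshare:
# zfs set sharenfs=off raid/www/user-testuser
# zfs destroy raid/www/user-testuser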
2010 Dec 18
10
a single nfs file system shared out twice with different permissions
I am trying to configure a system where I have two different NFS shares which point to the same directory. The idea is that if you come in via one path you will have read-only access and can't delete any files, and if you come in via the second path you will have read/write access. For example, to create the read/write NFS share:
zfs create tank/snapshots
zfs set sharenfs=on tank/snapshots
root
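If per-client (rather than per-path) access control would do, the share options themselves can express this; trusted-host is a hypothetical client name:
# zfs set sharenfs=ro,rw=trusted-host tank/snapshots   # read-only by default, read/write for one client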
2007 Jun 26
2
NFS, nested ZFS filesystems and ownership
Hello, I'm sure there is a simple solution, but I am unable to figure this one out. Assuming I have tank/fs, tank/fs/fs1, tank/fs/fs2, and I set sharenfs=on for tank/fs (child filesystems are inheriting it as well), and I chown user:group /tank/fs, /tank/fs/fs1 and /tank/fs/fs2, I see:
ls -la /tank/fs
user:group .
user:group fs1
user:group fs2
user:group some_other_file
If I mount
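Part of what is likely going on: an NFSv3 client does not cross server-side mountpoints, so the child filesystems appear as empty directories unless each is mounted separately (NFSv4 mirror mounts handle this automatically on newer clients). A client-side sketch with hypothetical hostnames:
client# mount -F nfs server:/tank/fs /mnt/fs
client# mount -F nfs server:/tank/fs/fs1 /mnt/fs/fs1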
2009 Aug 14
4
order bug, legacy mount and nfs sharing
Hi, I've encountered this bug: http://www.opensolaris.org/jive/thread.jspa?threadID=108316&tstart=30 and to work around the problem I'm using legacy mounts. Now the system boots without problems, but the NFS server doesn't start because it couldn't find any share. So I've disabled NFS sharing with zfs set sharenfs=off on my zfs filesystems and tried to use the share
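The legacy-share counterpart to legacy mounts, sketched under the assumption that the dataset is tank/fs (set to mountpoint=legacy): list it in /etc/vfstab and /etc/dfs/dfstab so mounting and sharing both happen outside ZFS:
# /etc/vfstab entry:
tank/fs  -  /tank/fs  zfs  -  yes  -
# /etc/dfs/dfstab entry:
share -F nfs -o rw /tank/fs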
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What, apart from zfs send/receive, can be done to free the fragmented space? One ZFS filesystem was used for some months to store large disk images (each 50 GByte) which are copied there with rsync. This ZFS reports 6.39 TByte usage with zfs list but only 2 TByte with du. The other ZFS was used for similar
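Snapshots are another explanation worth ruling out before fragmentation: rsync rewrites the image files, and any snapshot pins the old blocks, which zfs list counts but du does not. A sketch with a hypothetical dataset name:
# zfs list -t snapshot -r tank/images
# zfs get usedbysnapshots,usedbydataset tank/images   # on releases that have these properties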
2009 Mar 09
3
cannot mount '/export': directory is not empty
Hello, I am desperate. Today I realized that my OS 108 doesn't want to boot. I have no idea what I screwed up; I upgraded to 108 last week without any problems. Here is where I'm stuck:
Reading ZFS config: done.
Mounting ZFS filesystems: (1/17)
cannot mount '/export': directory is not empty
(17/17)
$ svcs -x
svc:/system/filesystem/local:default (local file
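Two ways this is commonly dealt with, assuming the dataset is rpool/export; something wrote into /export before the dataset mounted:
# ls -a /export               # find the stray files that block the mount
# zfs mount -O rpool/export   # or overlay-mount on top of the non-empty directory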
2007 Apr 19
9
ZFS disables nfs/server on a host
I have an Ultra 10 client running Sol10 U3 that has a zfs pool set up on the extra space of the internal ide disk. There's just the one fs and it is shared with the sharenfs property. When this system reboots, nfs/server ends up getting disabled, and this is the error from the SMF logs:
[ Apr 16 08:41:22 Executing start method ("/lib/svc/method/nfs-server start") ]
[ Apr 16
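The SMF side of diagnosing this, assuming default instance names:
# svcs -x nfs/server                                 # why the service is down or in maintenance
# tail /var/svc/log/network-nfs-server:default.log   # the full start-method output
# svcadm clear nfs/server                            # retry once the cause is fixed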
2007 Jun 03
4
/dev/random problem after moving to zfs boot:
I have one thing happening at boot now which must have started during the migration to zfs boot: I get an error message about /dev/random: "No randomness provider enabled for /dev/random. Use cryptoadm to provide one." Does anyone know how to fix this? Another thing: is it possible to upgrade to a higher build when using zfs boot? Is this what LiveUpgrade does? And is there a step by
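A guess at the fix, assuming the swrand kernel software provider was left disabled during the migration:
# cryptoadm list                            # check whether swrand appears among the kernel providers
# cryptoadm enable provider=swrand random   # re-enable it for the random interface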