Displaying 15 results from an estimated 15 matches for "sharetab".
2007 Oct 24
1
S10u4 in kernel sharetab
There was a lot of talk about ZFS and NFS shares being a problem when there was a large number of filesystems. There was a fix that in part included an in-kernel sharetab (I think :) Does anyone know if this has made it into S10u4?
Thanks,
BlueUmp
This message posted from opensolaris.org
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000 say) on a system that supports ZFS? We currently use vxfs and assign a user quota, and backups are done via Legato Networker.
From what little I currently understand, the general
2008 Feb 17
12
can't share a zfs
-bash-3.2$ zfs share tank
cannot share 'tank': share(1M) failed
-bash-3.2$
How do I figure out what's wrong?
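A hedged first-pass checklist for a failing `zfs share`: the dataset name `tank` comes from the post, and the commands assume Solaris 10/OpenSolaris with the NFS server packages installed.

```shell
# Possible checks when `zfs share` fails with "share(1M) failed".
# Dataset name "tank" is taken from the post; adapt as needed.
zfs get sharenfs,mountpoint,mounted tank  # sharing needs sharenfs != off and a mounted dataset
share /tank                               # running share(1M) by hand often prints the real error
svcs -x svc:/network/nfs/server           # the NFS server service must be online
```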
2006 Aug 22
1
Interesting zfs destroy failure
Saw this while writing a script today -- while debugging the script, I was Ctrl-C-ing it a lot rather
than waiting for the zfs create / zfs set commands to complete. After doing so, my cleanup script
failed to zfs destroy the new filesystem:
root@kronos:/ # zfs destroy -f raid/www/user-testuser
cannot unshare 'raid/www/user-testuser': /raid/www/user-testuser: not shared
root
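The destroy appears to trip over stale share state: the dataset is marked as shared but share(1M) no longer agrees. One possible workaround, sketched here untested with the dataset name from the post, is to clear the share property so that destroy skips the unshare step.

```shell
# Hedged workaround sketch for "cannot unshare ... not shared".
zfs set sharenfs=off raid/www/user-testuser  # clear the share property first
zfs destroy -f raid/www/user-testuser        # destroy no longer needs to unshare
```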
2008 Dec 23
1
Upgrade from UFS Sol 10u5 to ZFS Sol 10u6/OS 2008.11[SEC=UNCLASSIFIED]
Hi ZFS gods,
I have an x4500 I wish to upgrade from an SVM UFS Sol 10u5 to a ZFS
rpool 10u6 or Opensolaris.
Since I know (via backing up my sharetab) what shares I need to have
(all nfs share - no cifs on this 4500 - YAY) and have organised
downtime for this server, would it be easier for me to go to Solaris
10u6 (or opensolaris) by just installing from scratch and re-importing
the ZPOOL (of course after exporting it before installing t...
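Assuming the fresh-install route, the sequence could look like the sketch below; `tank` stands in for the actual data pool. A point in favour of this route is that `sharenfs` properties are stored in the pool itself, so NFS shares configured that way survive an export/import.

```shell
# Hedged sketch of the reinstall path; "tank" is a placeholder pool name.
cp /etc/dfs/sharetab /var/tmp/sharetab.backup  # keep a record of current shares
zpool export tank                              # cleanly detach the data pool
# ... install Solaris 10u6 / OpenSolaris from scratch ...
zpool import tank                              # sharenfs properties return with the pool
```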
2007 Jun 03
4
/dev/random problem after moving to zfs boot:
I have one thing happening now at boot which must have happened during the migration to zfs boot. I get an error message about /dev/random: "No randomness provider enabled for /dev/random. Use cryptoadm to provide one." Does anyone know how to fix this?
Another thing: Is it possible to upgrade to a higher build when using zfs boot? Is this what LiveUpgrade does? And is there a step by
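For the /dev/random error, the usual advice from this era was to re-enable a randomness provider with cryptoadm(1M). The provider name below is only an example; check what `cryptoadm list` reports on the affected system first.

```shell
# Hedged sketch; exact provider names vary by system and release.
cryptoadm list                           # show providers and their state
cryptoadm enable provider=swrand random  # swrand: kernel software RNG (example name)
```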
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
...ult
tank aclmode groupmask default
tank aclinherit secure default
tank canmount on default
tank shareiscsi off default
tank xattr on default
/etc/dfs/dfstab is empty
/etc/dfs/sharetab:
/tank - nfs rw
Now, when I try to mount this share from multiple boxes, I get 'Permission denied' when I try to create/modify any file.
Mounting from a Linux box:
/etc/fstab
mosasaur:/tank /tank nfs4 rw,user 0 0
mount /tank
ls -al tank == drwxr...
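Two common causes fit these symptoms, sketched below with one hypothetical name; the truncated `drwxr...` listing above points at the first.

```shell
# Hedged sketch; "clienthost" is hypothetical, /tank and tank are from the post.
# 1. Directory permissions: an rw NFS share still enforces local file modes.
ls -ld /tank                              # drwxr-xr-x lets only the owner write
# 2. Root squashing: client root maps to "nobody" unless granted explicitly.
zfs set sharenfs='rw,root=clienthost' tank
```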
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after ZFS upgrades. Here is an example:
ormandj@neutron.corenode.com:~$ zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was
2008 Nov 12
0
ZFS sens state
...= 3
6074: fstat64(3, 0x08047AD0) = 0
6074: stat64("/dev/pts/0", 0x08047BE0) = 0
6074: open("/etc/mnttab", O_RDONLY) = 4
6074: fstat64(4, 0x08047AA0) = 0
6074: open("/etc/dfs/sharetab", O_RDONLY) = 5
6074: fstat64(5, 0x08047AA0) = 0
6074: open("/etc/mnttab", O_RDONLY) = 6
6074: fstat64(6, 0x08047AB0) = 0
6074: sysconfig(_CONFIG_PAGESIZE) = 4096
6074: ioctl...
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
...0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.2G 1.1M 2.2G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
8.9G 5.9G 2.9G 67% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 2.2G 40K 2.2G 1% /tmp
swap 2.2G 40K 2.2G 1% /var/run
/dev/dsk/c1t0d0s7...
2008 Apr 03
3
[Bug 971] New: zfs key -l fails after unloading (keyscope=dataset)
http://defect.opensolaris.org/bz/show_bug.cgi?id=971
Summary: zfs key -l fails after unloading (keyscope=dataset)
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identify itself as:
Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance
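The RMW warning means a partition boundary falls part-way through a 4096-byte physical sector. With 512-byte logical sectors, a start LBA can be rounded up to a 4 KiB boundary (a multiple of 8) as sketched below; the starting value 63 is just a typical legacy fdisk offset used for illustration, not taken from the post.

```shell
# Round a starting sector up to the next multiple of 8 (8 x 512 B = 4 KiB).
start=63                            # example legacy fdisk start
aligned=$(( (start + 7) / 8 * 8 ))  # integer round-up to a 4 KiB boundary
echo "$aligned"                     # prints 64
```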
2011 May 03
4
multiple disk failures cause zpool hang
...00000000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC,
MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEC30000
memcntl(0xFE620000, 3576, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
open("/dev/zfs", O_RDWR) = 3
open("/etc/mnttab", O_RDONLY) = 4
open("/etc/dfs/sharetab", O_RDONLY) = 5
stat64("/lib/libavl.so.1", 0x080431A8) = 0
resolvepath("/lib/libavl.so.1", "/lib/libavl.so.1", 1023) = 16
open("/lib/libavl.so.1", O_RDONLY) = 6
mmapobj(6, MMOBJ_INTERPRET, 0xFEC305E0, 0x08043214, 0x0...
2007 Sep 19
53
enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS
We are looking for a replacement enterprise file system to handle storage
needs for our campus. For the past 10 years, we have been happily using DFS
(the distributed file system component of DCE), but unfortunately IBM
killed off that product and we have been running without support for over a
year now. We have looked at a variety of possible options, none of which
have proven fruitful. We are