search for: refquota

Displaying 20 results from an estimated 27 matches for "refquota".

2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
...FS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse to fix this (e.g., rm). I played around with this on my OpenSolaris box at home, read around on mailing lists, and concluded that the 'refquota' property would solve this. With some trepidation, we decided at work that we would ignore the problem and wait for 10u6, at which point we would put the value of the quota property in the refquota property and set quota=none. We did this a week or so ago, and we're still havin...
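The migration described above (quota value moved into refquota, then quota cleared) can be sketched roughly as follows; the dataset name is hypothetical:

```shell
# Sketch of the workaround described above (dataset name is hypothetical).
# Copy the current quota value into refquota, then clear quota, so that
# snapshot-held space no longer counts against the limit users hit.
QUOTA=$(zfs get -H -o value quota tank/home/joe)
zfs set refquota="$QUOTA" tank/home/joe
zfs set quota=none tank/home/joe
```

The intended effect is that rm can free space even when the filesystem reads as full, because blocks held only by snapshots fall outside the referenced limit; as the thread notes, this did not fully resolve the EDQUOT-on-unlink behaviour they saw.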
2009 Jan 16
2
Problem setting quotas on a zfs pool
Solaris 10 5/08. Customer migrated to a new EMC array with a snapshot and did a send and receive. He is now trying to set quotas on the ZFS file system and getting the following error:

[root@osprey /] # zfs set quota=800g target/u05
cannot set property for 'target/u05': size is less than current used or reserved space
[root@osprey /] # zfs list -o
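The error above means the requested quota is below the dataset's current used or reserved space. A minimal way to diagnose it, assuming the dataset names from the post:

```shell
# Check how much space the dataset actually consumes before setting a quota.
zfs list -o name,used,refer,reserv target/u05
# Either choose a quota at or above the "used" figure...
zfs set quota=1t target/u05
# ...or limit only live (referenced) data, excluding snapshots and children:
zfs set refquota=800g target/u05
```

The 1t and 800g values are illustrative; the right figure depends on what `zfs list` reports.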
2008 Mar 20
5
Snapshots silently eating user quota
All, I assume this issue is pretty old given the time ZFS has been around. I have tried searching the list but could not understand how ZFS actually takes snapshot space into account. I have a user walter on whom I tried the following ZFS operations:

bash-3.00# zfs get quota store/catB/home/walter
NAME  PROPERTY  VALUE  SOURCE
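One way to see how snapshot space enters the accounting, using the dataset from the post (the property breakdown requires a pool version with the usedby* properties):

```shell
# "used" (which counts against quota=) includes space held by snapshots;
# "referenced" is only the live data visible to the user.
zfs get quota,used,referenced,usedbysnapshots store/catB/home/walter
# To stop charging snapshot space against the user's limit,
# refquota= caps referenced data instead of total used space.
```

This is a sketch of the usual diagnosis, not the resolution reached in the thread.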
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What besides zfs send/receive can be done to free the fragmented space? One ZFS was used for some months to store large disk images (each 50 GByte) which were copied there with rsync. This ZFS then reports 6.39 TByte usage with zfs list and only 2 TByte usage with du. The other ZFS was used for similar
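Before concluding fragmentation, the gap between du and zfs list can usually be broken down; a sketch, with a hypothetical dataset name:

```shell
# Split USED into snapshots / dataset / children / refreservation.
zfs list -o space tank/images
zfs get usedbysnapshots,usedbydataset tank/images
# du walks only live files, so it roughly tracks "usedbydataset";
# "zfs list" USED also counts snapshot-held blocks, which repeated
# rsync rewrites of large images can inflate considerably.
```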
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
...pool/users      casesensitivity  insensitive              -
remotepool/users  vscan            off                      default
remotepool/users  nbmand           on                       received
remotepool/users  sharesmb         name=users,guestok=true  received
remotepool/users  refquota         none                     default
remotepool/users  refreservation   none                     default
remotepool/users  primarycache     all                      default
remotepool/users  secondarycache   all                      default
remotepool/users  usedbysnapshots...
2009 Mar 03
8
zfs list extentions related to pNFS
...default
rpool/pnfsds  checksum        on    default
rpool/pnfsds  compression     off   default
rpool/pnfsds  zoned           off   default
rpool/pnfsds  copies          1     default
rpool/pnfsds  refquota        none  default
rpool/pnfsds  refreservation  none  default
rpool/pnfsds  sharepnfs       off   default
rpool/pnfsds  mds             none  default
Thanks, Lisa
2008 Mar 27
4
dsl_dataset_t pointer during ''zfs create'' changes
I've noticed that the dsl_dataset_t that points to a given dataset changes during the lifetime of a 'zfs create' command. We start out with one dsl_dataset_t* during dmu_objset_create_sync(), but by the time we are later mounting the dataset we have a different in-memory dsl_dataset_t* referring to the same dataset. This causes me a big issue with per dataset
2011 Aug 11
6
unable to mount zfs file system..pl help
...ation  none  -
pool1/fs1  casesensitivity  sensitive  -
pool1/fs1  vscan            off   default
pool1/fs1  nbmand           off   default
pool1/fs1  sharesmb         off   default
pool1/fs1  refquota         none  default
pool1/fs1  refreservation   none  default
pool1/fs1  primarycache     all   default
pool1/fs1  secondarycache   all   default
pool1/fs1  usedbysnapshots  0     -
pool1...
2010 Oct 01
1
File permissions getting destroyed with M$ software on ZFS
...-
fsdata/admin/ENS  casesensitivity  sensitive  -
fsdata/admin/ENS  vscan            off   default
fsdata/admin/ENS  nbmand           off   default
fsdata/admin/ENS  sharesmb         off   default
fsdata/admin/ENS  refquota         none  default
fsdata/admin/ENS  refreservation   none  default
fsdata/admin/ENS  primarycache     all   default
fsdata/admin/ENS  secondarycache   all   default
fsdata/admin/ENS  usedbysnapshots  0...
2013 Mar 06
0
where is the free space?
...-
tank/lxc/tipper/brick1  vscan           off   default
tank/lxc/tipper/brick1  nbmand          off   default
tank/lxc/tipper/brick1  sharesmb        off   default
tank/lxc/tipper/brick1  refquota        none  default
tank/lxc/tipper/brick1  refreservation  none  default
tank/lxc/tipper/brick1  primarycache    all   default
tank/lxc/tipper/brick1  secondarycache  all...
2010 Jan 06
0
ZFS filesystem size mismatch
...none  -
storagepool/ndc  casesensitivity  sensitive  -
storagepool/ndc  vscan            off   default
storagepool/ndc  nbmand           off   default
storagepool/ndc  sharesmb         off   default
storagepool/ndc  refquota         none  default
storagepool/ndc  refreservation   none  default
-- This message posted from opensolaris.org
2010 Jan 07
4
link in zpool upgrade -v broken
http://www.opensolaris.org/os/community/zfs/version/ No longer exists. Is there a bug for this yet? -- Ian.
2009 May 31
1
ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)
...--------------------------------------
 1  Initial ZFS version
 2  Ditto blocks (replicated metadata)
 3  Hot spares and double parity RAID-Z
 4  zpool history
 5  Compression using the gzip algorithm
 6  bootfs pool property
 7  Separate intent log devices
 8  Delegated administration
 9  refquota and refreservation properties
10  Cache devices
11  Improved scrub performance
12  Snapshot properties
13  snapused property
For more information on a particular version, including supported releases, see: http://www.opensolaris.org/os/community/zfs/version/N Where 'N' is the...
2010 Jan 07
2
ZFS upgrade.
...--------------------------------------
 1  Initial ZFS version
 2  Ditto blocks (replicated metadata)
 3  Hot spares and double parity RAID-Z
 4  zpool history
 5  Compression using the gzip algorithm
 6  bootfs pool property
 7  Separate intent log devices
 8  Delegated administration
 9  refquota and refreservation properties
10  Cache devices
11  Improved scrub performance
12  Snapshot properties
13  snapused property
14  passthrough-x aclinherit
15  user/group space accounting
For more information on a particular version, including supported releases, see: http://www.opensolaris.or...
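The version table above comes from `zpool upgrade -v`; the surrounding checks can be sketched as (pool name hypothetical):

```shell
# See which pool version is on disk and which versions this
# software release supports, then upgrade if desired.
zpool get version tank   # current on-disk pool version
zpool upgrade -v         # prints the feature-per-version table above
zpool upgrade tank       # one-way: older releases cannot import afterwards
```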
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
...ivity  sensitive  -
rpool  vscan           off   default
rpool  nbmand          off   default
rpool  sharesmb        off   default
rpool  refquota        none  default
rpool  refreservation  none  default
rpool  primarycache    all   default
rpool  secondarycache  all   default...
2010 Jun 08
1
ZFS Index corruption and Connection reset by peer
...-
zhome/username  casesensitivity  sensitive  -
zhome/username  vscan            off   default
zhome/username  nbmand           off   default
zhome/username  sharesmb         off   default
zhome/username  refquota         none  default
zhome/username  refreservation   none  default
zhome/username  primarycache     all   default
zhome/username  secondarycache   all   default
zhome/username  usedbysnapshots  0...
2009 Apr 15
3
MySQL On ZFS Performance(fsync) Problem?
...none  -
data/mysqldata3  casesensitivity  sensitive  -
data/mysqldata3  vscan            off   default
data/mysqldata3  nbmand           off   default
data/mysqldata3  sharesmb         off   default
data/mysqldata3  refquota         none  default
data/mysqldata3  refreservation   none  default

MySQL Conf Detail: ...
[mysqld3]
lower_case_table_names=1
user=mysql
port = 3308
socket = /usr/local/mysql/sock/mysql3.sock
pid-file=/usr/local/mysql/sock/mysql3.pid
datadir=/data/mysqldata3/myd...
2018 Mar 01
29
[Bug 13317] New: rsync returns success when target filesystem is full
https://bugzilla.samba.org/show_bug.cgi?id=13317

Bug ID:    13317
Summary:   rsync returns success when target filesystem is full
Product:   rsync
Version:   3.1.2
Hardware:  x64
OS:        FreeBSD
Status:    NEW
Severity:  major
Priority:  P5
Component: core
Assignee:  wayned at samba.org
2009 Aug 21
9
Not sure how to do this in zfs
Hello all, I've tried changing all kinds of attributes for the ZFS filesystems, but I can't seem to find the right configuration. I'm trying to move some filesystems under another; it looks like this: /pool/joe_user moves to /pool/homes/joe_user. I know I can do this with zfs rename, and everything is fine. The problem I'm having is, when I mount
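The move described above can be sketched as follows, using the paths from the post; the mountpoint handling is where such renames usually trip up:

```shell
# Move the dataset under the new parent.
zfs rename pool/joe_user pool/homes/joe_user
# The mountpoint follows the new parent only if it was inherited;
# a locally set mountpoint keeps pointing at the old path.
zfs get -o value,source mountpoint pool/homes/joe_user
# If SOURCE is "local", revert to inheriting from pool/homes:
zfs inherit mountpoint pool/homes/joe_user
```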
2011 Sep 22
4
Beginner Question: Limited conf: file-based storage pools vs. FSs directly on rpool
Hi, everyone! I have a beginner's question: I must configure a small file server. It only has two disk drives, and they are (forcibly) destined to be used in a mirrored, hot-spare configuration. The OS is installed and working, and rpool is mirrored on the two disks. The question is: I want to create some ZFS file systems for sharing them via CIFS. But given my limited configuration:
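With only two mirrored disks, one common answer to the question above is to create filesystems directly under rpool rather than file-backed pools; a sketch, with hypothetical dataset and share names:

```shell
# Create a filesystem on the existing rpool and share it over CIFS
# (Solaris sharesmb syntax); names here are illustrative only.
zfs create -o sharesmb=name=shared rpool/export/shared
# Cap its growth so it cannot crowd out the OS on the shared mirror.
zfs set quota=50g rpool/export/shared
```

File-backed pools on top of rpool would add a second layer of checksumming and allocation for no redundancy gain in this configuration.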