search for: refquotas

Displaying 20 results from an estimated 27 matches for "refquotas".

Did you mean: refquota
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse to fix this (e.g., rm). I played around with this on my OpenSolaris box at home, read around
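
For anyone hitting the same thing, a minimal sketch of the usual local workaround (the dataset name and sizes are illustrative, not from the thread): truncate the file in place so the unlink no longer needs to allocate copy-on-write metadata, or lift the limit briefly.

    # truncate in place, then remove; the truncate often succeeds
    # where a plain rm returns EDQUOT
    cp /dev/null /home/user/bigfile && rm /home/user/bigfile
    # or temporarily raise the limit (illustrative dataset and sizes)
    zfs set refquota=11G pool/home/user
    rm /home/user/bigfile
    zfs set refquota=10G pool/home/user
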
2009 Jan 16
2
Problem setting quotas on a zfs pool
Solaris 10 5/08 Customer migrated to a new emc array with a snap shot and did a send and receive. He is now trying to set quotas on the zfs file system and getting the following error. [root at osprey /] # zfs set quota=800g target/u05 cannot set property for 'target/u05': size is less than current used or reserved space [root at osprey /] # zfs list -o
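
The error itself is expected behaviour: quota must be at least the space already used plus reservations, snapshots included. A sketch of how one might diagnose it (the refquota line assumes a release new enough to support that property, i.e. Solaris 10 10/08 or later):

    # see what is consuming the space, snapshots and reservations included
    zfs get used,referenced,reservation target/u05
    zfs list -t snapshot -r target/u05
    # refquota limits only the live data, so it can be set below 'used'
    zfs set refquota=800g target/u05
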
2008 Mar 20
5
Snapshots silently eating user quota
All, I assume this issue is pretty old given the time ZFS has been around. I have tried searching the list but could not understand how ZFS actually takes snapshot space into account. I have a user walter on whom I try the following ZFS operations bash-3.00# zfs get quota store/catB/home/walter NAME PROPERTY VALUE SOURCE
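
The usual explanation: the quota property counts blocks held by snapshots, while refquota (available from zpool version 9 / Solaris 10 10/08 onward) counts only the live filesystem. A sketch, with an illustrative size:

    zfs get quota,refquota store/catB/home/walter
    # limit only what the user can see, not what snapshots retain
    zfs set refquota=5G store/catB/home/walter
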
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What, other than zfs send/receive, can be done to free the fragmented space? One ZFS was used for some months to store large disk images (each 50GByte) which were copied there with rsync. This ZFS reports 6.39TByte usage with zfs list but only 2TByte usage with du. The other ZFS was used for similar
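
Before suspecting fragmentation, it is worth breaking down where ZFS thinks the space went; snapshots retained across the rsync cycles are the common culprit. A sketch (dataset name illustrative; the space shorthand needs a release that supports it):

    zfs list -o space pool/images      # splits USED into snapshots/dataset/children
    zfs list -t snapshot -r pool/images
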
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: FreeBSD with a zpool v28 and a Nexenta (OpenSolaris b134) running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type cd /remotepool/us (for /remotepool/users) and autocomplete with the tab key, I get a panic. Check the panic @
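
For reference, this is the supported replication direction: a stream from an older pool/filesystem version can be received by newer software, not the reverse. A sketch of the setup described (host and source pool names are illustrative):

    zfs snapshot -r sourcepool/users@repl1
    zfs send -R sourcepool/users@repl1 | ssh freebsd-host zfs receive -dF remotepool
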
2009 Mar 03
8
zfs list extentions related to pNFS
Hi, I am soliciting input from the ZFS engineers and/or ZFS users on an extension to "zfs list". Thanks in advance for your feedback. Quick Background: The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU object set type which is used on the pNFS data server to store pNFS stripe DMU objects. A pNFS dataset gets created with the "zfs
2008 Mar 27
4
dsl_dataset_t pointer during 'zfs create' changes
I've noticed that the dsl_dataset_t that points to a given dataset changes during the lifetime of a 'zfs create' command. We start out with one dsl_dataset_t* during dmu_objset_create_sync() but by the time we are later mounting the dataset we have a different in-memory dsl_dataset_t* referring to the same dataset. This causes me a big issue with per dataset
2011 Aug 11
6
unable to mount zfs file system..pl help
# uname -a Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux # rpm -qa|grep zfs zfs-test-0.5.2-1 zfs-modules-0.5.2-1_2.6.18_194.el5 zfs-0.5.2-1 zfs-modules-devel-0.5.2-1_2.6.18_194.el5 zfs-devel-0.5.2-1 # zfs list NAME USED AVAIL REFER MOUNTPOINT pool1 120K 228G 21K /pool1 pool1/fs1 21K 228G 21K /vik [root at
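
The listing itself hints at the problem: pool1/fs1 carries a mountpoint of /vik rather than the expected /pool1/fs1. A sketch of how one might correct it (assuming the property was set locally at some point):

    zfs get mountpoint pool1/fs1
    zfs set mountpoint=/pool1/fs1 pool1/fs1   # or: zfs inherit mountpoint pool1/fs1
    zfs mount -a
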
2010 Oct 01
1
File permissions getting destroyed with M$ software on ZFS
All, Running Samba 3.5.4 on Solaris 10 with a ZFS file system. I have an issue with shared group folders: userA in GroupA creates files just fine with the correct inherited permissions, 660. The problem is that when userB in GroupA reads and modifies that file with M$ Office apps, the permissions get whacked to 060+ and the file becomes read-only for everyone. I did
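
Office applications typically save via a temporary file and rename it over the original, so the new file picks up the share's create mask rather than the old file's mode; the odd execute-style bits usually come from DOS attributes being mapped onto permission bits. One common smb.conf-side mitigation, sketched with an illustrative share:

    [groupshare]
        path = /pool/shares/groupA
        create mask = 0660
        force create mode = 0660
        force directory mode = 0770
        # stop the DOS archive/system/hidden bits being stored in the
        # owner/group/world execute bits
        map archive = no
        map system = no
        map hidden = no
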
2013 Mar 06
0
where is the free space?
hi All, Ubuntu 12.04 and glusterfs 3.3.1. root at tipper:/data# df -h /data Filesystem Size Used Avail Use% Mounted on tipper:/data 2.0T 407G 1.6T 20% /data root at tipper:/data# du -sh . 10G . root at tipper:/data# du -sh /data 13G /data It's quite confusing. I also tried to free up the space by stopping the machine (actually an LXC VM), with no luck. After umounting the space
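
A sketch of the usual first checks when df and du disagree this much: space held by files that were unlinked while still open, and usage on the underlying gluster bricks (paths illustrative):

    lsof +L1 /data             # open files with a link count of zero
    du -sh /path/to/brick      # compare brick-level usage on each server
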
2010 Jan 06
0
ZFS filesystem size mismatch
A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a 'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct size. The file system became full during Christmas and I increased the quota from 1 to 1.5 to 2TB and then decreased it to 1.5TB. No reservations. The files and processes that filled up the file system have been removed/stopped.
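
When the offending files and processes are already gone, the remaining usual suspect is snapshots pinning the freed blocks. A sketch (dataset and snapshot names illustrative):

    zfs list -t snapshot -r pool/fs    # anything listed still holds the old blocks
    zfs destroy pool/fs@oldsnap        # reclaims the space that snapshot pins
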
2010 Jan 07
4
link in zpool upgrade -v broken
http://www.opensolaris.org/os/community/zfs/version/ No longer exists. Is there a bug for this yet? -- Ian.
2009 May 31
1
ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)
Hi. Using ZFS-FUSE. $SUBJECT happened 3 out of 5 times while testing; just wanna know if someone has seen such a scenario before. Steps: ------------------------------------------------------------ root at localhost:/# uname -a Linux localhost 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009 i686 GNU/Linux root at localhost:/# zpool upgrade -v This system is currently running ZFS pool
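
One plausible explanation for ORA-00322 after a rollback is that the snapshot was taken while the database was writing, so the redo logs it captured are not self-consistent. A sketch of a cleaner test cycle (dataset name illustrative):

    # stop or quiesce the database first, so the snapshot is crash-consistent
    zfs snapshot pool/oradata@pretest
    # ... run the test ...
    zfs rollback -r pool/oradata@pretest
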
2010 Jan 07
2
ZFS upgrade.
Hello, Is there a way to upgrade my current ZFS version? I see the version could be as high as 22. I tried the command below; it seems that you can only upgrade by upgrading the OS release. [ilmcoso0vs056:root] / # zpool upgrade -V 16 tank invalid version '16' usage: upgrade upgrade -v upgrade [-V version] <-a | pool ...> [ilmcoso0vs056:root] /
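
As the usage message suggests, -V cannot take a version newer than what the running software supports; getting to version 22 would indeed require a newer OS/ZFS release. A sketch of the normal sequence:

    zpool upgrade          # shows each pool's version next to the software's
    zpool upgrade -v       # lists every version this release can go to
    zpool upgrade tank     # upgrades tank to the highest supported version
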
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
Greetings, my OpenSolaris 06/2009 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation it seems that there might be an issue with the ahci driver. No problem with the OpenSolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the ZFS filesystem, the system froze and afterwards refused to boot. Now when investigating
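
A sketch of the first integrity checks worth running once the pool imports at all (pool name illustrative):

    zpool scrub rpool
    zpool status -v rpool   # after the scrub, -v names files with permanent errors
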
2010 Jun 08
1
ZFS Index corruption and Connection reset by peer
Hello, I'm currently using dovecot 1.2.11 on FreeBSD 8.0 with ZFS filesystems. So far, so good; it works quite nicely, but I have a couple of glitches. Each user has his own ZFS filesystem, mounted on /home/<user> (easier to set per-user quotas), and mail is stored in their home. From day one, when people check their mail via imap, a lot of index corruption occurred: dovecot:
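
A common mitigation in this setup is to keep Dovecot's indexes off the per-user filesystems entirely; a dovecot.conf sketch, assuming maildir in the user's home (the index path is illustrative):

    mail_location = maildir:~/Maildir:INDEX=/var/dovecot/index/%u
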
2009 Apr 15
3
MySQL On ZFS Performance(fsync) Problem?
Hi all, I did some tests of MySQL's insert performance on ZFS and hit a big performance problem; *I'm not sure where the problem is*. Environment: 2 Intel X5560 (8 cores), 12GB RAM, 7 SLC SSDs (Intel). A Java client runs 8 threads concurrently inserting into one InnoDB table: *~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1 ~600 qps when sync_binlog=10
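
A sketch of the ZFS-side tuning usually tried first for InnoDB fsync-heavy workloads (dataset and device names illustrative; logbias requires a release that supports it):

    # match the dataset recordsize to InnoDB's 16k page size before
    # creating the data files
    zfs set recordsize=16k tank/mysql/data
    # bias synchronous writes toward throughput, or give the pool a
    # dedicated low-latency log device for the fsync path
    zfs set logbias=throughput tank/mysql/data
    zpool add tank log c0t5d0
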
2018 Mar 01
29
[Bug 13317] New: rsync returns success when target filesystem is full
https://bugzilla.samba.org/show_bug.cgi?id=13317 Bug ID: 13317 Summary: rsync returns success when target filesystem is full Product: rsync Version: 3.1.2 Hardware: x64 OS: FreeBSD Status: NEW Severity: major Priority: P5 Component: core Assignee: wayned at samba.org
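
A sketch of how one might reproduce the report on a quota-limited dataset (names and sizes illustrative):

    zfs create -o quota=100m tank/rsynctest
    rsync -a /some/large/dir/ /tank/rsynctest/
    echo $?     # the report says this prints 0 even though writes hit ENOSPC
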
2009 Aug 21
9
Not sure how to do this in zfs
Hello all, I've tried changing all kinds of attributes on the ZFS filesystems, but I can't seem to find the right configuration. I'm trying to move some filesystems under another; it looks like this: /pool/joe_user moves to /pool/homes/joe_user. I know I can do this with zfs rename, and everything is fine. The problem I'm having is, when I mount
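
A sketch of the rename plus the mountpoint bookkeeping that usually trips this up (a renamed dataset keeps a locally-set mountpoint instead of inheriting the new parent's):

    zfs rename pool/joe_user pool/homes/joe_user
    zfs get -r mountpoint pool/homes            # check for a lingering local setting
    zfs inherit mountpoint pool/homes/joe_user
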
2011 Sep 22
4
Beginner Question: Limited conf: file-based storage pools vs. FSs directly on rpool
Hi, everyone! I have a beginner's question: I must configure a small file server. It only has two disk drives, and they are (forcibly) destined to be used in a mirrored, hot-spare configuration. The OS is installed and working, and rpool is mirrored on the two disks. The question is: I want to create some ZFS file systems for sharing them via CIFS. But given my limited configuration:
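
For this question, filesystems created directly under the root pool are generally the simpler answer; a file-backed pool layered on top of rpool only adds overhead. A sketch, assuming the Solaris in-kernel CIFS service (dataset names illustrative):

    zfs create -o mountpoint=/export/shares rpool/shares
    zfs create -o sharesmb=on rpool/shares/public
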