Displaying 20 results from an estimated 10000 matches similar to: "Question: zfs set userquota not working on existing datasets"
2009 May 20
5
ZFS userquota groupquota test
I have been playing around with osol-nv-b114 version, and the ZFS user
and group quotas.
First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone
else involved).
I'm currently copying over one of the smaller user areas, and setting up
their quotas, so I have yet to start large scale testing. But the
initial work is very promising. (Just 90G data, 341694 accounts)
Using
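For reference, a minimal sketch of how the new per-user and per-group quotas are set and checked; the dataset tank/home and the names alice and staff are placeholders, not taken from the post above:
# zfs set userquota@alice=1G tank/home
# zfs set groupquota@staff=20G tank/home
# zfs get userquota@alice tank/home
# zfs userspace tank/home
zfs userspace lists per-user usage and quota on the dataset, which is handy when loading hundreds of thousands of accounts.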
2009 Nov 26
5
rquota did not show userquota (Solaris 10)
Hi,
we have a new fileserver running on X4275 hardware with Solaris 10U8.
On this fileserver we created one test directory with a quota and mounted it
on another Solaris 10 system. There, the quota command did not show the
used quota. Does this feature only work with OpenSolaris, or is it
intended to work on Solaris 10 as well?
Here is what we did on the server:
# zfs create -o mountpoint=/export/home2
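A sketch of what such a setup typically looks like end to end; pool/home2 and testuser are placeholder names:
# zfs create -o mountpoint=/export/home2 pool/home2
# zfs set sharenfs=on pool/home2
# zfs set userquota@testuser=1G pool/home2
On the NFS client, quota(1M) queries rquotad on the server:
$ quota -v testuser
Whether rquotad on Solaris 10U8 reports ZFS userquota values is exactly what the poster is asking; on the server itself, zfs userspace shows them.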
2019 Jan 03
2
doveadm_allowed_commands doesn't work as expected
Trying to limit the API calls to doveadm-http-api by configuring allowed
commands, but once the commands are added to the list, the REST API no longer
works.
1) Returns the correct reply when doveadm_allowed_commands is empty
# curl -k -H "Content-Type: application/json" -H "Authorization:
X-Dovecot-API <base64 api key>" https://localhost:9088/doveadm/v1
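For comparison, a hedged sketch of actually invoking a command through the HTTP API; the command name mailboxList and the user testuser are only illustrative:
# curl -k -H "Content-Type: application/json" \
  -H "Authorization: X-Dovecot-API <base64 api key>" \
  -d '[["mailboxList", {"user": "testuser"}, "tag1"]]' \
  https://localhost:9088/doveadm/v1
The request body is a JSON array of [command, parameters, tag] triples; a GET on /doveadm/v1, as above, only lists the commands the server is willing to accept.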
2019 Jan 03
0
doveadm_allowed_commands doesn't work as expected
> On 03 January 2019 at 22:45 Ronald Poon <ronaldpoon at ud.hk> wrote:
>
>
> Trying to limit the API calls to doveadm-http-api by configuring allowed
> commands, but once the commands are added to the list, the REST API no longer
> works.
>
>
> 1) Returns the correct reply when doveadm_allowed_commands is empty
>
> # curl -k -H "Content-Type:
2008 Mar 12
5
[Bug 752] New: zfs set keysource no longer works on existing pools
http://defect.opensolaris.org/bz/show_bug.cgi?id=752
Summary: zfs set keysource no longer works on existing pools
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: blocker
Priority: P1
Component: other
AssignedTo:
2011 Dec 05
2
Strange quota problem
I have a strange problem with quota on v2.0.14. We have an LDAP user
directory, and all users should have a mailQuota defined there. My
problem is that some users get the quota enforced, while others don't,
and "doveadm user" doesn't seem to agree with "doveadm quota get".
Ref:
$ doveadm user janfrode at example.net
userdb: janfrode at example.net
home
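A quick way to compare the two views for one user is to run both commands side by side, using the same example account:
$ doveadm user janfrode@example.net
$ doveadm quota get -u janfrode@example.net
If userdb returns a quota rule derived from the LDAP mailQuota attribute but doveadm quota get reports no limit, the userdb-to-quota-plugin mapping is the first place to look.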
2010 May 05
0
zfs destroy -f and dataset is busy?
We have a pair of OpenSolaris systems running snv_124. Our main zpool
'z' is running ZFS pool version 18.
Problem:
#zfs destroy -f z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
cannot destroy 'z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00':
dataset is busy
I have tried:
Unable to destroy numerous datasets even with a -f option.
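One common cause of "dataset is busy" for a snapshot on pool version 18 is a user hold; a minimal sketch of checking for and releasing one (the hold tag keep is just an example):
# zfs holds z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
# zfs release keep z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
zfs holds lists any tags pinning the snapshot; once they are released, zfs destroy should go through.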
2013 Dec 04
1
Question regarding quotas (is this a bug or intended behavior)?
Hi,
I was wondering if this is a normal behavior (test was made using
Dovecot v2.2.9).
In my config, quotas are configured as follows:
plugin {
  quota = dict:Userquota::file:%h/dovecot-quota
  quota_rule = *:storage=1G
  quota_rule2 = Trash:ignore
}
# doveadm mailbox status -u my_user "messages vsize" '*'
Trash messages=4997 vsize=229535631
Drafts messages=0 vsize=0
Sent
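To see what the quota root itself counts for that user, as opposed to the raw per-mailbox sizes above, the quota plugin can be queried and recalculated directly:
# doveadm quota get -u my_user
# doveadm quota recalc -u my_user
Comparing the Userquota root's STORAGE value with the vsize numbers shows whether Trash really is being ignored.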
2010 Sep 20
5
create mirror copy of existing zfs stack
Hi,
I have a mirror pool tank with two devices underneath, created in this way:
#zpool create tank mirror c3t500507630E020CEAd1 c3t500507630E020CEAd0
Created file system tank/home:
#zfs create tank/home
Created another file system tank/home/sridhar:
#zfs create tank/home/sridhar
After that I have created files and directories under tank/home and tank/home/sridhar.
Now I detached the 2nd device, i.e.
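Note that a plainly detached mirror half cannot be imported as a copy of the pool; on builds that have it, zpool split is the intended way to turn one side of the mirror into a new, importable pool. A sketch, with tank2 as a placeholder name:
# zpool split tank tank2 c3t500507630E020CEAd0
# zpool import tank2
After the split, tank keeps the remaining device and tank2 contains the detached one with all datasets (tank2/home, tank2/home/sridhar) intact.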
2008 Jan 07
0
CR 6647661 <User 1-5Q-12446>, Now responsible engineer P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset
*Synopsis*: "set once" / "create time only" properties can't be set for pool level dataset
Due to a change requested by <User 1-5Q-12446>,
<User 1-5Q-12446> is now the responsible engineer for:
CR 6647661 changed on Jan 7 2008 by <User 1-5Q-12446>
=== Field ============ === New Value ============= === Old Value =============
Responsible Engineer
2009 Apr 15
0
CR 6647661 Updated, P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset
*Synopsis*: "set once" / "create time only" properties can't be set for pool level dataset
CR 6647661 changed on Apr 15 2009 by <User 1-ERV-6>
=== Field ============ === New Value ============= === Old Value =============
See Also 6828754
====================== ===========================
2017 Apr 23
0
Re: ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Thies C. Arntzen wrote:
> Hi,
>
> I’m new here so apologies if this has been answered before.
>
> I have a box that uses ZFS for everything (ubuntu 17.04) and I want to
> create a libvirt pool on that. My ZFS pool is named "big"
>
> So I do:
>
> > zfs create big/zpool
> > virsh pool-define-as --name zpool --source-name big/zpool --type zfs
> >
2017 Apr 14
2
ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Hi,
I’m new here so apologies if this has been answered before.
I have a box that uses ZFS for everything (ubuntu 17.04) and I want to
create a libvirt pool on that. My ZFS pool is named "big"
So I do:
> zfs create big/zpool
> virsh pool-define-as --name zpool --source-name big/zpool --type zfs
> virsh pool-start zpool
> virsh pool-autostart zpool
> virsh pool-list
>
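Once the pool is defined and started, volumes are normally created and listed through libvirt itself; guest1 and the 10G size below are placeholders:
> virsh vol-create-as zpool guest1 10G
> virsh vol-list zpool
Each volume shows up as a zvol under big/zpool and can be handed to virt-install with --disk vol=zpool/guest1.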
2006 Jun 19
0
snv_42 zfs/zpool dump core and kernel/fs/zfs won't load.
I'm pretty sure this is my fault but I need some help in fixing the system.
It was installed at one point with snv_29 with the pre-integration
SUNWzfs package. I did a live upgrade to snv_42 but forgot to remove
the old SUNWzfs before I did so. When the system booted up, I got
complaints about kstat install because I still had an old zpool kernel
module lying around.
So I did pkgrm
2017 Apr 24
1
Re: ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Thank you for your reply.
I have managed to create a virtual machine on my ZFS filesystem using
virt-install :-) It seems to me that my version of libvirt (Ubuntu 17.04)
has problems enumerating the devices when "virsh vol-list" is used. The
volumes are available for virt-install but not through virsh or virt-manager.
As to when the volumes disappear in virsh vol-list, I have no idea. I’m
not
2012 Sep 19
5
Dovecot deliver Segmentation fault when arrive the first message
Hi,
I have found this strange problem. I'm working with Debian 6, dovecot
2.1.9 and vpopmail-auth.
LDA is configured and works fine, but when the first message arrives,
"dovecot-lda" returns a "Segmentation fault". The message is written to
the user's mailbox but also remains in the queue of qmail (deferral:
Segmentation_fault/) and at the
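A hedged sketch of the usual next step, getting a backtrace from the crash; the binary path is the Debian default, and the core file location depends on how qmail invokes the LDA:
# ulimit -c unlimited
# gdb /usr/lib/dovecot/dovecot-lda /path/to/core
(gdb) bt full
The ulimit has to be in effect in the environment qmail uses to spawn dovecot-lda, otherwise no core file is written.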
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS:
ZFS filesystem version 4
ZFS storage pool version 15
Yesterday my computer running FreeBSD 8.2-RELENG shut down with an "ad4
detached" error while I was copying a big file, and after the reboot two
WD Green 1TB drives said goodbye. One of them died and the other reports
ZFS errors:
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path=
offset=187921768448 size=512 error=6
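A sketch of the usual recovery steps once a disk dies under a pool; ad6 and ad8 below are placeholder FreeBSD device names, not taken from the post:
# zpool status -v zroot
# zpool replace zroot ad6 ad8
# zpool scrub zroot
zpool status shows which vdev is faulted and which files have permanent errors; replace resilvers onto the new disk, and the scrub verifies what is left.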
2012 Jan 06
4
ZFS Upgrade
Dear list,
I'm about to upgrade a zpool from version 10 to version 29. I suppose
that this upgrade will address several performance issues that are present
on version 10. However, inside that pool we have several ZFS filesystems,
all of them version 1. My first question is: is there a problem with
performance, or any other problem, if you operate a version-29 zpool with
version-1 ZFS filesystems?
Is it better
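For reference, a hedged sketch of checking and performing both upgrades; tank is a placeholder pool name:
# zpool get version tank
# zpool upgrade tank
# zfs get -r version tank
# zfs upgrade -r tank
zpool upgrade and zfs upgrade are separate steps: the first raises the pool version, the second walks the filesystems and raises their on-disk filesystem version.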
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).
I thought one of the disks might have been to blame, so I tried swapping it
out
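With a single 7-disk raidz1 vdev, usable space is roughly 6 times the smallest disk in the set (minus metadata overhead), so one undersized disk shrinks the whole vdev. A sketch of the two commands that show the layout and the sizes actually in use:
# zpool status magicant
# zpool list magicant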
2008 May 20
7
[Bug 1986] New: 'zfs destroy' hangs on encrypted dataset
http://defect.opensolaris.org/bz/show_bug.cgi?id=1986
Summary: 'zfs destroy' hangs on encrypted dataset
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other