Hello Brian,
I synced my gluster repository back to July 5th and tried quota on a
directory of a distribute volume; the quota was enforced properly there.
Here are the logs:
[root@centos-qa-client-3 glusterfs]# /root/july6git/inst/sbin/gluster volume quota dist list
path limit_set size
----------------------------------------------------------------------------------
/dir 10485760 10485760
[root@centos-qa-client-3 glusterfs]# /root/july6git/inst/sbin/gluster volume info
Volume Name: dist
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp,rdma
Bricks:
Brick1: 10.1.12.134:/mnt/dist
Brick2: 10.1.12.135:/mnt/dist
Options Reconfigured:
features.limit-usage: /dir:10MB
features.quota: on
[root@centos-qa-client-3 glusterfs]#
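For reference, the Options Reconfigured shown above are what the stock
quota CLI sets; roughly the following sequence (a sketch only, using the
same volume and directory names; exact syntax may differ between builds):

gluster volume quota dist enable
gluster volume quota dist limit-usage /dir 10MB
gluster volume quota dist list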
Could you please let us know the <commit id> to which your workspace is
synced?
Thanks,
Saurabh
________________________________________
From: gluster-users-bounces at gluster.org on behalf of
gluster-users-request at gluster.org
Sent: Friday, July 08, 2011 12:30 AM
To: gluster-users at gluster.org
Subject: Gluster-users Digest, Vol 39, Issue 13
Today's Topics:
1. Re: Issue with Gluster Quota (Brian Smith)
2. Re: Issues with geo-rep (Carl Chenet)
----------------------------------------------------------------------
Message: 1
Date: Thu, 07 Jul 2011 13:10:06 -0400
From: Brian Smith <brs at usf.edu>
Subject: Re: [Gluster-users] Issue with Gluster Quota
To: gluster-users at gluster.org
Message-ID: <4E15E86E.6030407 at usf.edu>
Content-Type: text/plain; charset=ISO-8859-1
Sorry about that. I re-populated with an 82MB dump from dd:
[root@gluster1 ~]# gluster volume quota home list
path limit_set size
----------------------------------------------------------------------------------
/brs 10485760 81965056
[root@gluster1 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
getfattr: Removing leading '/' from absolute path names
# file: glusterfs/home/brs
security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000006000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x0000000000006000
[root@gluster2 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
getfattr: Removing leading '/' from absolute path names
# file: glusterfs/home/brs
security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000004e25000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x0000000004e25000
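Adding the two per-brick trusted.glusterfs.quota.size values above (the
same addition Junaid describes in the quoted reply below) reproduces the
aggregate that quota list reports; a quick sanity check in bash:

# sum the per-brick quota.size xattrs (hex) from gluster1 and gluster2
echo $(( 0x0000000000006000 + 0x0000000004e25000 ))
81965056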
Brian Smith
Senior Systems Administrator
IT Research Computing, University of South Florida
4202 E. Fowler Ave. ENB308
Office Phone: +1 813 974-1467
Organization URL: http://rc.usf.edu
On 07/07/2011 04:50 AM, Mohammed Junaid wrote:
>>
>> [root@gluster1 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
>> getfattr: Removing leading '/' from absolute path names
>> # file: glusterfs/home/brs
>> security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
>> trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
>> trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
>> trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000006000
>> trusted.glusterfs.quota.dirty=0x3000
>> trusted.glusterfs.quota.size=0x0000000000006000
>>
>> and
>>
>> [root@gluster2 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
>> getfattr: Removing leading '/' from absolute path names
>> # file: glusterfs/home/brs
>> security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
>> trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
>> trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
>> trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000002000
>> trusted.glusterfs.quota.dirty=0x3000
>> trusted.glusterfs.quota.size=0x0000000000002000
>
>
> trusted.glusterfs.quota.size=0x0000000000006000
> trusted.glusterfs.quota.size=0x0000000000002000
>
> So, quota adds these values to calculate the size of the directory. They
> are in hex, so when I add them the value comes up to 32 KB. So I suspect
> that you have deleted some data from your volume. These values won't be
> helpful now. Can you please re-run the same test and report the values
> when such a problem occurs again.
>
------------------------------
Message: 2
Date: Thu, 07 Jul 2011 19:56:46 +0200
From: Carl Chenet <chaica at ohmytux.com>
Subject: Re: [Gluster-users] Issues with geo-rep
To: gluster-users at gluster.org
Message-ID: <4E15F35E.1060404 at ohmytux.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 07/07/2011 15:25, Kaushik BV wrote:
> Hi Chaica,
>
> This primarily means that the RPC communication between the master
> gsyncd module and the slave gsyncd module is broken, which can happen
> for various reasons. Check that all the prerequisites are satisfied:
>
> - FUSE is installed on the machine, since the geo-replication module
> mounts the GlusterFS volume using FUSE to sync data.
> - If the slave is a volume, check that the volume is started.
> - If the slave is a plain directory, check that the directory has
> already been created with the desired permissions (not applicable in
> your case).
> - If GlusterFS 3.2 is not installed in the default location on the
> master and was built with a custom install prefix, configure
> *gluster-command* to point to the exact location (see the sketch after
> this list).
> - If GlusterFS 3.2 is not installed in the default location on the
> slave and was built with a custom install prefix, configure
> *remote-gsyncd-command* to point to the exact place where gsyncd is
> located.
> - Locate the slave log and check it for anomalies.
> - Passwordless SSH is set up properly between the host and the remote
> machine (not applicable in your case).
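For the two custom-prefix items above, the geo-replication config
interface can be used roughly like this (a sketch only: the /opt paths
are placeholders for the actual install prefix, and the option names may
differ slightly between builds):

gluster volume geo-replication test-volume 192.168.1.32::test-volume \
    config gluster-command /opt/glusterfs/sbin/glusterfs
gluster volume geo-replication test-volume 192.168.1.32::test-volume \
    config remote-gsyncd-command /opt/glusterfs/libexec/glusterfs/gsyncd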
OK, the situation has evolved slightly. I now have a slave log and a
clearer error message on the master:
[2011-07-07 19:53:16.258866] I [monitor(monitor):42:monitor] Monitor:
------------------------------------------------------------
[2011-07-07 19:53:16.259073] I [monitor(monitor):43:monitor] Monitor:
starting gsyncd worker
[2011-07-07 19:53:16.332720] I [gsyncd:286:main_i] <top>: syncing:
gluster://localhost:test-volume -> ssh://192.168.1.32::test-volume
[2011-07-07 19:53:16.343554] D [repce:131:push] RepceClient: call
6302:140305661662976:1310061196.34 __repce_version__() ...
[2011-07-07 19:53:20.931523] D [repce:141:__call__] RepceClient: call
6302:140305661662976:1310061196.34 __repce_version__ -> 1.0
[2011-07-07 19:53:20.932172] D [repce:131:push] RepceClient: call
6302:140305661662976:1310061200.93 version() ...
[2011-07-07 19:53:20.933662] D [repce:141:__call__] RepceClient: call
6302:140305661662976:1310061200.93 version -> 1.0
[2011-07-07 19:53:20.933861] D [repce:131:push] RepceClient: call
6302:140305661662976:1310061200.93 pid() ...
[2011-07-07 19:53:20.934525] D [repce:141:__call__] RepceClient: call
6302:140305661662976:1310061200.93 pid -> 10075
[2011-07-07 19:53:20.957355] E [syncdutils:131:log_raise_exception]
<top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/gsyncd.py",
line
102, in main
main_i()
File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/gsyncd.py",
line
293, in main_i
local.connect()
File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/resource.py",
line 379, in connect
raise RuntimeError("command failed: " + " ".join(argv))
RuntimeError: command failed: /usr/sbin/glusterfs --xlator-option
*-dht.assert-no-child-down=true -L DEBUG -l
/var/log/glusterfs/geo-replication/test-volume/ssh%3A%2F%2Froot%40192.168.1.32%3Agluster%3A%2F%2F127.0.0.1%3Atest-volume.gluster.log
-s localhost --volfile-id test-volume --client-pid=-1
/tmp/gsyncd-aux-mount-hy6T_w
[2011-07-07 19:53:20.960621] D [monitor(monitor):58:monitor] Monitor:
worker seems to be connected (?? racy check)
[2011-07-07 19:53:21.962501] D [monitor(monitor):62:monitor] Monitor:
worker died in startup phase
The command launched by glusterfs returns a shell exit code of 255, which
I believe means the command was terminated by a signal. In the slave log
I have:
[2011-07-07 19:54:49.571549] I [fuse-bridge.c:3218:fuse_thread_proc]
0-fuse: unmounting /tmp/gsyncd-aux-mount-z2Q2Hg
[2011-07-07 19:54:49.572459] W [glusterfsd.c:712:cleanup_and_exit]
(-->/lib/libc.so.6(clone+0x6d) [0x7f2c8998b02d]
(-->/lib/libpthread.so.0(+0x68ba) [0x7f2c89c238ba]
(-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xc5) [0x7f2c8a8f51b5])))
0-: received signum (15), shutting down
[2011-07-07 19:54:51.280207] W [write-behind.c:3029:init]
0-test-volume-write-behind: disabling write-behind for first 0 bytes
[2011-07-07 19:54:51.291669] I [client.c:1935:notify]
0-test-volume-client-0: parent translators are ready, attempting connect
on transport
[2011-07-07 19:54:51.292329] I [client.c:1935:notify]
0-test-volume-client-1: parent translators are ready, attempting connect
on transport
[2011-07-07 19:55:38.582926] I [rpc-clnt.c:1531:rpc_clnt_reconfig]
0-test-volume-client-0: changing port to 24009 (from 0)
[2011-07-07 19:55:38.583456] I [rpc-clnt.c:1531:rpc_clnt_reconfig]
0-test-volume-client-1: changing port to 24009 (from 0)
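One way to narrow this down further would be to re-run the failing mount
command from the master log by hand against a throwaway mount point and
check the exit status directly (a sketch; the mount point and log file
below are placeholders):

mkdir -p /tmp/georep-debug-mnt
/usr/sbin/glusterfs --xlator-option '*-dht.assert-no-child-down=true' \
    -L DEBUG -l /tmp/georep-debug.log -s localhost \
    --volfile-id test-volume --client-pid=-1 /tmp/georep-debug-mnt
echo $?   # the 255 seen by gsyncd should show up here as well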
Bye,
Carl Chenet
------------------------------
End of Gluster-users Digest, Vol 39, Issue 13
*********************************************