How could I set such an option? I was unaware of this timeout. Is there a
way to prevent it, or to refresh a timeout of less than a second? In the
meantime, is the quick workaround to run a long listing on the directory
path?
Khoi
-------- Original Message --------
From: anand.avati at gmail.com
To: Khoi Mai <KHOIMAI at up.com>
Cc: gluster-users <gluster-users at gluster.org>
Sent on: 12/24 02:04:10 AM CST
Subject: Re: [Gluster-users] 11. Re: glusterfs-3.4.2qa4 released
Khoi,
Looking at your logs, my guess is that the client was mounted with a very
high --entry-timeout=N value (or client-2 accesses file, runs strace etc.
all within the default 1sec after client-1 recreated the file through vi).
If not, I don't see how your client could get this log entry:
[2013-12-19 20:08:51.729392] W [fuse-bridge.c:705:fuse_attr_cbk]
0-glusterfs-fuse: 42: STAT() /world => -1 (Stale file handle)
If a file is deleted/recreated within the entry-timeout period from another
client, this can happen, and that is a gluster independent FUSE behavior.
Avati
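For readers wondering how to adjust this: the timeouts in question are FUSE-level mount options. A sketch, with server, volume, and mount point as placeholders (if your mount.glusterfs does not accept these options, the glusterfs client binary takes the equivalent --entry-timeout/--attribute-timeout flags):

    # 0 disables dentry/attribute caching: better coherence between
    # clients, at some metadata-performance cost
    mount -t glusterfs -o entry-timeout=0,attribute-timeout=0 server1:/gv0 /mnt/gv0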
On Thu, Dec 19, 2013 at 12:48 PM, Khoi Mai <KHOIMAI at up.com> wrote:
Gluster community,
        https://bugzilla.redhat.com/show_bug.cgi?id=1041109
I've updated the above bugzilla while testing the latest gluster package
glusterfs-3.4.2qa4 with reproducible results. I am unsure if there is any
feature option that can remedy this behavior.
Khoi
From: gluster-users-request at gluster.org
To: gluster-users at gluster.org
Date: 12/19/2013 06:00 AM
Subject: Gluster-users Digest, Vol 68, Issue 20
Sent by: gluster-users-bounces at gluster.org
Send Gluster-users mailing list submissions to
        gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
http://supercolony.gluster.org/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
        gluster-users-request at gluster.org
You can reach the person managing the list at
        gluster-users-owner at gluster.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."
Today's Topics:
   1. Re: help with replace-brick migrate (Mariusz Sobisiak)
   2. Re: help with replace-brick migrate (Raphael Rabelo)
   3. Re: help with replace-brick migrate (Mariusz Sobisiak)
   4. Problem adding brick (replica) (Saša Friedrich)
   5. Shared storage for critical infrastructure (Pieter Baele)
   6. Debugging gfapi (Kelly Burkhart)
   7. Re: [Gluster-devel] glusterfs-3.4.2qa4 released (Kaleb Keithley)
   8. Cancelled: Gluster Community Weekly Meeting (Vijay Bellur)
   9. Cancelled: Gluster Community Weekly Meeting (Vijay Bellur)
  10. Gluster Community Weekly Meeting Minutes -- 2013-12-18 (Vijay Bellur)
  11. Re: glusterfs-3.4.2qa4 released (Vijay Bellur)
  12. Re: Debugging gfapi (Jeff Darcy)
  13. Trying to start glusterd (Knut Moe)
  14. gfapi from non-root (Kelly Burkhart)
  15. Passing noforget option to glusterfs native client mounts (Chalcogen)
  16. Re: Passing noforget option to glusterfs native client mounts (Chalcogen)
  17. Re: Problem adding brick (replica) (Anirban Ghoshal)
  18. Re: Problem adding brick (replica) (Saša Friedrich)
  19. Re: gfapi from non-root (Kelly Burkhart)
  20. failed to create volume ends with a prefix of it is already part of a
      volume (William Kwan)
  21. Re: Trying to start glusterd (Kaushal M)
  22. Re: qemu remote insecure connections (Vijay Bellur)
  23. Re: failed to create volume ends with a prefix of it is already part
      of a volume (Bernhard Glomm)
----------------------------------------------------------------------
Message: 1
Date: Wed, 18 Dec 2013 13:21:26 +0100
From: "Mariusz Sobisiak" <MSobisiak at ydp.pl>
To: <gluster-users at gluster.org>
Subject: Re: [Gluster-users] help with replace-brick migrate
Message-ID: <507D8C234E515F4F969362F9666D7EBBED1CB7 at nagato1.intranet.ydp>
Content-Type: text/plain; charset="us-ascii"
> I didn't know there could be a lot of trash (orphan) files in
> .glusterfs, so here is what I did:
I think you can easily check with this command (on the old gluster server):
find .glusterfs/ -type f -links 1
If it returns anything, those files have only one hard link and no
corresponding "real" file on the brick, so they are unintended (orphan)
files.
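To see which real path (if any) shares an inode with a given .glusterfs entry, GNU find can match by inode; a sketch, with the brick path and gfid file name as placeholders:

    # prints every path hard-linked to the gfid file; an orphan prints
    # only its .glusterfs path
    find /export/brick1 -samefile /export/brick1/.glusterfs/ab/cd/abcd1234-gfid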
> # du -hs *
> 3.5G documents
> 341G home
> 58G archives
> 808G secure_folder
> 93G secure_folder2
So you do have files on the new gluster. I had understood that you had just
the .glusterfs directory...
> 1.3T .glusterfs/
It looks OK. It isn't taking any extra space because those are hard links.
> So, I have 1.3 TB in gluster!! So, I think that replace-brick worked
> correctly ... right?
Probably yes.
> So, how can I restart the replace-brick command?
I am not sure why you would want to restart the replace-brick command.
You wrote that the status shows: migration complete... So it's OK; just do
the commit (but first make sure everything is OK).
If you're not sure whether all the files were copied, you can compare the
file counts on both nodes (the old one and the migrated one) like this:
find /where/the/brick/is/ -path "*/.glusterfs/*" -prune -o -name '*' -print | wc -l
If the command returns the same value on both, you have all the files :D
But in my opinion everything looks okay (except for the question of why so
many files are orphaned on the old glusterfs).
--
Mariusz
------------------------------
Message: 2
Date: Wed, 18 Dec 2013 10:59:12 -0200
From: Raphael Rabelo <rabeloo at gmail.com>
To: Mariusz Sobisiak <MSobisiak at ydp.pl>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] help with replace-brick migrate
Message-ID:
<CAMOH6nDr8ZCRjdybDfE51V_+4UDCkrTx9xaSfJM5JhXM8fvoqw at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
> I think you can easily check with this command (on the old gluster server):
> find .glusterfs/ -type f -links 1
> If it returns anything, those files have only one hard link and no
> corresponding "real" file on the brick, so they are unintended (orphan)
> files.
The result of # find .glusterfs/ -type f -links 1 is empty ...
> I am not sure why you would want to restart the replace-brick command.
> You wrote that the status shows: migration complete... So it's OK; just
> do the commit (but first make sure everything is OK).
Before committing the replace, I erased all the files on the new node,
thinking that it wasn't OK... :(
I was thinking of adding these 2 new bricks to the same volume with replica
4 and using self-heal to replicate all the data... what do you think?
Tks!
------------------------------
Message: 3
Date: Wed, 18 Dec 2013 14:28:41 +0100
From: "Mariusz Sobisiak" <MSobisiak at ydp.pl>
To: <gluster-users at gluster.org>
Subject: Re: [Gluster-users] help with replace-brick migrate
Message-ID: <507D8C234E515F4F969362F9666D7EBBED1D5A at nagato1.intranet.ydp>
Content-Type: text/plain; charset="iso-8859-2"
> The result of # find .glusterfs/ -type f -links 1 is empty ...
Did you run it on the old gluster (where the 2 TB is)? It may take a long
time. If it really returns nothing there, that is in fact very strange.
You can use that other find command to compare the amount of data.
> I was thinking of adding these 2 new bricks to the same volume with
> replica 4 and using self-heal to replicate all the data... what do you
> think?
You can abort the replace-brick and do it again. I thought you wanted to
migrate data to another server; now you want to expand the volume? Of
course, if you just want to expand, you can use the add-brick command, as
sketched below.
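A sketch of both options, with placeholder volume and brick names (syntax as in gluster 3.4; adjust to your setup):

    # abort the uncommitted replace-brick
    gluster volume replace-brick VOLNAME oldhost:/brick newhost:/brick abort
    # or expand instead: raise the replica count while adding the two bricks
    gluster volume add-brick VOLNAME replica 4 new1:/brick new2:/brick
    # then trigger self-heal to populate the new bricks
    gluster volume heal VOLNAME full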
--
Mariusz
------------------------------
Message: 4
Date: Wed, 18 Dec 2013 15:38:39 +0100
From: Saša Friedrich <sasa.friedrich at bitlab.si>
To: gluster-users at gluster.org
Subject: [Gluster-users] Problem adding brick (replica)
Message-ID: <52B1B36F.1020704 at bitlab.si>
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Hi!
I have some trouble adding a brick to an existing gluster volume.
When I try to (in CLI):
    gluster> volume add-brick data_domain replica 3
    gluster2.data:/glusterfs/data_domain
I get:
    volume add-brick: failed:
I probed the peer successfully, peer status returns:
    Hostname: gluster3.data
    Uuid: e694f552-636a-4cf3-a04f-997ec87a880c
    State: Peer in Cluster (Connected)
    Hostname: gluster2.data
    Port: 24007
    Uuid: 36922d4c-55f2-4cc6-85b9-a9541e5619a2
    State: Peer in Cluster (Connected)
Existing volume info:
    Volume Name: data_domain
    Type: Replicate
    Volume ID: ae096e7d-cf0c-46ed-863a-9ecc3e8ce288
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster1.data:/glusterfs/data_domain
    Brick2: gluster3.data:/glusterfs/data_domain
    Options Reconfigured:
    storage.owner-gid: 36
    storage.owner-uid: 36
    server.allow-insecure: on
    network.remote-dio: enable
    cluster.eager-lock: enable
    performance.stat-prefetch: off
    performance.io-cache: off
    performance.read-ahead: off
    performance.quick-read: off
Only thing I found in log is:
    (/var/log/glusterfs/cli.log)
    [2013-12-18 12:09:17.281310] W [cli-rl.c:106:cli_rl_process_line]
    0-glusterfs: failed to process line
    [2013-12-18 12:10:07.650267] I
    [cli-rpc-ops.c:332:gf_cli_list_friends_cbk] 0-cli: Received resp to
    list: 0
    (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log)
    [2013-12-18 12:12:38.887911] I
    [glusterd-brick-ops.c:370:__glusterd_handle_add_brick] 0-management:
    Received add brick req
    [2013-12-18 12:12:38.888064] I
    [glusterd-brick-ops.c:417:__glusterd_handle_add_brick] 0-management:
    replica-count is 3
    [2013-12-18 12:12:38.888124] I
    [glusterd-brick-ops.c:256:gd_addbr_validate_replica_count]
    0-management: Changing the replica count of volume data_domain from
    2 to 3
I'm running some VM-s on this volume so I'd really like to avoid
restarting glusterd service.
OS is FC19, kernel 3.11.10-200.fc19.x86_64, glusterfs.x86_64 3.4.1-1.fc19
tnx for help!
------------------------------
Message: 5
Date: Wed, 18 Dec 2013 16:11:38 +0100
From: Pieter Baele <pieter.baele at gmail.com>
To: gluster-users at gluster.org
Subject: [Gluster-users] Shared storage for critical infrastructure
Message-ID:
<CADDXySqPg2jkrS4LX2sCC7uPQZ_eKF2JX60OepLFOOnRVJCiDA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
Hello,
Some critical infrastructure needs a file share for messaging.
Can Gluster (RH Storage) be used for HA purposes?
I was also considering GlusterFS, but I need to be sure that it is
compatible with - and handles well - the software vendor's requirements:
- Write Order
- Synchronous Write persistence
- Distributed File Locking
- Unique Write Ownership
Sincerely,
PieterB
------------------------------
Message: 6
Date: Wed, 18 Dec 2013 09:23:43 -0600
From: Kelly Burkhart <kelly.burkhart at gmail.com>
To: gluster-users at gluster.org
Subject: [Gluster-users] Debugging gfapi
Message-ID:
<CAND8VyCSM+E3ecRv1m8CUJc8vcTp_cdCvDfn3ETmsy83p23xEw at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
Is there some trick I need to do to use gdb on libgfapi? I configured
gluster like this:
./configure --disable-ibverbs --enable-debug
And also tried this:
CFLAGS=-g CPPFLAGS=-g LDFLAGS=-g ./configure --disable-ibverbs
--enable-debug
I can't step into calls like glfs_new; the debugger skips over the call.
Is there some magic that makes gdb think that gfapi is not debuggable?
-K
------------------------------
Message: 7
Date: Wed, 18 Dec 2013 11:11:40 -0500 (EST)
From: Kaleb Keithley <kkeithle at redhat.com>
To: gluster-users at gluster.org, gluster-devel at nongnu.org
Subject: Re: [Gluster-users] [Gluster-devel] glusterfs-3.4.2qa4 released
Message-ID: <2119012122.44202998.1387383100132.JavaMail.root at redhat.com>
Content-Type: text/plain; charset=utf-8
YUM repos for EPEL (5 & 6) and Fedora (18, 19, 20) are at
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.2qa4/
----- Original Message -----
From: "Gluster Build System" <jenkins at build.gluster.org>
To: gluster-users at gluster.org, gluster-devel at nongnu.org
Sent: Monday, December 16, 2013 11:53:40 PM
Subject: [Gluster-devel] glusterfs-3.4.2qa4 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.2qa4/
SRC:
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.2qa4.tar.gz
This release is made off jenkins-release-53
-- Gluster Build System
_______________________________________________
Gluster-devel mailing list
Gluster-devel at nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel
------------------------------
Message: 8
Date: Wed, 18 Dec 2013 11:36:10 -0500 (EST)
From: Vijay Bellur <vbellur at redhat.com>
To: gluster-users at gluster.org, gluster-devel at nongnu.org
Subject: [Gluster-users] Cancelled: Gluster Community Weekly Meeting
Message-ID: <1076006422.19710808.1387384570071.JavaMail.root at redhat.com>
Content-Type: text/plain; charset="utf-8"
A single instance of the following meeting has been cancelled:
Subject: Gluster Community Weekly Meeting
Organizer: "Vijay Bellur" <vbellur at redhat.com>
Location: #gluster-meeting on irc.freenode.net
Time: Wednesday, December 25, 2013, 8:30:00 PM - 9:30:00 PM GMT +05:30
Chennai, Kolkata, Mumbai, New Delhi
Invitees: gluster-users at gluster.org; gluster-devel at nongnu.org;
Christian.Heggland at nov.com; vmallika at redhat.com;
bobby.jacob at alshaya.com; kevin.stevenard at alcatel-lucent.com;
radek.dymacz at databarracks.com; pportant at redhat.com;
ryade at mcs.anl.gov; roger at dutchmillerauto.com;
Thomas.Seitner at gugler.at ...
*~*~*~*~*~*~*~*~*~*
Cancelling this instance due to the holiday season.
------------------------------
Message: 9
Date: Wed, 18 Dec 2013 11:37:05 -0500 (EST)
From: Vijay Bellur <vbellur at redhat.com>
To: gluster-users at gluster.org, gluster-devel at nongnu.org
Subject: [Gluster-users] Cancelled: Gluster Community Weekly Meeting
Message-ID: <83718219.19711692.1387384625186.JavaMail.root at redhat.com>
Content-Type: text/plain; charset="utf-8"
A single instance of the following meeting has been cancelled:
Subject: Gluster Community Weekly Meeting
Organizer: "Vijay Bellur" <vbellur at redhat.com>
Location: #gluster-meeting on irc.freenode.net
Time: Wednesday, January 1, 2014, 8:30:00 PM - 9:30:00 PM GMT +05:30
Chennai, Kolkata, Mumbai, New Delhi
Invitees: gluster-users at gluster.org; gluster-devel at nongnu.org;
Christian.Heggland at nov.com; vmallika at redhat.com;
bobby.jacob at alshaya.com; kevin.stevenard at alcatel-lucent.com;
radek.dymacz at databarracks.com; pportant at redhat.com;
ryade at mcs.anl.gov; roger at dutchmillerauto.com;
Thomas.Seitner at gugler.at ...
*~*~*~*~*~*~*~*~*~*
Cancelling this instance due to the holiday season.
------------------------------
Message: 10
Date: Wed, 18 Dec 2013 23:15:44 +0530
From: Vijay Bellur <vbellur at redhat.com>
To: "'gluster-devel at nongnu.org'" <gluster-devel at
nongnu.org>,
? ? ? ? ? ? ? ? gluster-users Discussion List <Gluster-users at
gluster.org>
Subject: [Gluster-users] Gluster Community Weekly Meeting Minutes --
? ? ? ? ? ? ? ? 2013-12-18
Message-ID: <52B1DF48.60104 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Meeting minutes available at:
http://meetbot.fedoraproject.org/gluster-meeting/2013-12-18/gluster-meeting.2013-12-18-15.00.html
-Vijay
------------------------------
Message: 11
Date: Wed, 18 Dec 2013 23:17:02 +0530
From: Vijay Bellur <vbellur at redhat.com>
To: Lukáš Bezdička <lukas.bezdicka at gooddata.com>
Cc: "gluster-users at gluster.org" <gluster-users at gluster.org>,
        Gluster Devel <gluster-devel at nongnu.org>
Subject: Re: [Gluster-users] glusterfs-3.4.2qa4 released
Message-ID: <52B1DF96.7060809 at redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 12/17/2013 09:27 PM, Lukáš Bezdička wrote:
> Quite high memory usage for nfs daemon which we don't use at all.
> USER       PID %CPU %MEM    VSZ    RSS TTY   STAT START   TIME COMMAND
> root     23246  0.0  1.9 485524 313116 ?     Ssl  15:43   0:00
> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
> /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
> /var/run/a90b5253b325435599e00f1a6534b95c.socket
>
Can you please check if setting volume option nfs.drc to off brings down
the memory usage?
Thanks,
Vijay
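For reference, turning DRC off is an ordinary volume-set operation; a sketch, with the volume name as a placeholder:

    gluster volume set VOLNAME nfs.drc off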
------------------------------
Message: 12
Date: Wed, 18 Dec 2013 12:54:55 -0500
From: Jeff Darcy <jdarcy at redhat.com>
To: Kelly Burkhart <kelly.burkhart at gmail.com>, gluster-users at gluster.org
Subject: Re: [Gluster-users] Debugging gfapi
Message-ID: <52B1E16F.2030803 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 12/18/2013 10:23 AM, Kelly Burkhart wrote:
> Is there some trick I need to do to use gdb on libgfapi?
The formula for getting GlusterFS to build with the proper flags seems
to change frequently. If you look at configure.ac the current magic
seems to be:
        export enable_debug=yes
        configure/rpmbuild/whatever
That's what I do for my own build, and I'm generally able to step
through anything in a translator. I haven't tried with gfapi, but it
should be the same because it's also a library. Executables are a
different matter because they're explicitly stripped during the build
process.
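Spelled out for a source build, the sequence Jeff describes would look something like this (the test-program name is an example, not from the thread):

    export enable_debug=yes
    ./configure --disable-ibverbs
    make && sudo make install
    # then, for a gfapi test program compiled with -g:
    gdb --args ./gfapi_stat_test
    (gdb) break glfs_new
    (gdb) run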
------------------------------
Message: 13
Date: Wed, 18 Dec 2013 11:19:24 -0700
From: Knut Moe <kmoe66 at gmail.com>
To: gluster-users at gluster.org
Subject: [Gluster-users] Trying to start glusterd
Message-ID:
<CADXLLPhgNiA1PLOULw06X0b-7gbqi7r2NXaGt20WPevhDmh4A at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
I have two Ubuntu servers set up, downloaded GlusterFS 3.4 and by all
accounts it seems to have installed properly using the apt-get install
command.
However, when I issue a glusterd start or glusterd status command I am
getting the following error:
ERROR: failed to create log file (/var/log/glusterfs/start.log) (Permission
denied).
Is there a way to determine if gluster is installed properly and also
troubleshoot the above?
If I issue sudo glusterd start or sudo glusterd status I am returned to the
prompt with no additional info.
Thx.
------------------------------
Message: 14
Date: Wed, 18 Dec 2013 13:43:34 -0600
From: Kelly Burkhart <kelly.burkhart at gmail.com>
To: gluster-users at gluster.org
Subject: [Gluster-users] gfapi from non-root
Message-ID:
<CAND8VyCuidxwU7whEYfkG1th-ejtM2oLsM8usDWgK0wfpvohA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
How does one run a gfapi app without being root?
I've set server.allow-insecure on on the server side (and bounced all
gluster processes). Is there something else required?
My test program just stats a file on the cluster volume. It works as root
and fails as a normal user. Local log file shows a message about failing
to bind a privileged port.
-K
------------------------------
Message: 15
Date: Thu, 19 Dec 2013 01:16:15 +0530
From: Chalcogen <chalcogen_eg_oxygen at yahoo.com>
To: gluster-users at gluster.org
Subject: [Gluster-users] Passing noforget option to glusterfs native client mounts
Message-ID: <52B1FB87.8070002 at yahoo.com>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Hi everybody,
A few months back I joined a project where people want to replace their
legacy fuse-based (twin-server) replicated file-system with GlusterFS.
They also have a high-availability NFS server code tagged with the
kernel NFSD that they would wish to retain (the nfs-kernel-server, I
mean). The reason they wish to retain the kernel NFS and not use the NFS
server that comes with GlusterFS is mainly because there's this bit of
code that allows NFS IP's to be migrated from one host server to the
other in the case that one happens to go down, and tweaks on the export
server configuration allow the file-handles to remain identical on the
new host server.
The solution was to mount gluster volumes using the mount.glusterfs
native client program and then export the directories over the kernel
NFS server. This seems to work most of the time, but on rare occasions,
'stale file handle' is reported off certain clients, which really puts a
damper over the 'high-availability' thing. After suitably instrumenting
the nfsd/fuse code in the kernel, it seems that decoding of the
file-handle fails on the server because the inode record corresponding
to the nodeid in the handle cannot be looked up. Combining this with the
fact that a second attempt by the client to execute lookup on the same
file passes, one might suspect that the problem is identical to what
many people attempting to export fuse mounts over the kernel's NFS
server are facing; viz, fuse 'forgets' the inode records thereby causing
ilookup5() to fail. Miklos and other fuse developers/hackers would point
towards '-o noforget' while mounting their fuse file-systems.
I tried passing '-o noforget' to mount.glusterfs, but it does not seem
to recognize it. Could somebody help me out with the correct syntax to
pass noforget to gluster volumes? Or, something we could pass to
glusterfs that would instruct fuse to allocate a bigger cache for our
inodes?
Additionally, should you think that something else might be behind our
problems, please do let me know.
Here's my configuration:
Linux kernel version: 2.6.34.12
GlusterFS version: 3.4.0
nfs.disable option for volumes: OFF on all volumes
Thanks a lot for your time!
Anirban
P.s. I found quite a few pages on the web that admonish users that
GlusterFS is not compatible with the kernel NFS server, but do not
really give much detail. Is this one of the reasons for saying so?
------------------------------
Message: 16
Date: Thu, 19 Dec 2013 01:40:29 +0530
From: Chalcogen <chalcogen_eg_oxygen at yahoo.com>
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] Passing noforget option to glusterfs native client mounts
Message-ID: <52B20135.6030902 at yahoo.com>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
P.s. I think I need to clarify this:
I am only reading from the mounts and not modifying anything on the
server, so the commonest causes of stale file handles do not apply.
Anirban
------------------------------
Message: 17
Date: Thu, 19 Dec 2013 04:27:19 +0800 (SGT)
From: Anirban Ghoshal <chalcogen_eg_oxygen at yahoo.com>
To: Saša Friedrich <sasa.friedrich at bitlab.si>,
        "gluster-users at gluster.org" <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Problem adding brick (replica)
Message-ID: <1387398439.78214.YahooMailNeo at web193901.mail.sg3.yahoo.com>
Content-Type: text/plain; charset="utf-8"
OK, I am not associated with, or part of, the glusterFS development team in
any way; in fact I only started using glusterfs in the past 3-4 months or
so, but I have often observed that useful info can be found at <log file
dir>/.cmd_history.log, which is, in your case,
/var/log/glusterfs/.cmd_history.log
------------------------------
Message: 18
Date: Wed, 18 Dec 2013 21:32:43 +0100
From: Saša Friedrich <sasa.friedrich at bitlab.si>
To: Anirban Ghoshal <chalcogen_eg_oxygen at yahoo.com>,
        "gluster-users at gluster.org" <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Problem adding brick (replica)
Message-ID: <52B2066B.1070606 at bitlab.si>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Here is the line that gets logged in that file (which I wasn't aware
of - thanks, Anirban):
[2013-12-18 20:31:21.005913] : volume add-brick iso_domain replica 2
gluster2.data:/glusterfs/iso_domain : FAILED :
------------------------------
Message: 19
Date: Wed, 18 Dec 2013 14:49:28 -0600
From: Kelly Burkhart <kelly.burkhart at gmail.com>
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] gfapi from non-root
Message-ID:
<CAND8VyDWV3qZCQoOwYg2f1GVdhEn4LO6ch8u_uN5R2w9kBZaA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
OK, I believe I solved it by 1. setting the volume property
allow-insecure with the following command:
gluster volume set gv0 server.allow-insecure on
and 2. editing the /usr/local/etc/glusterfs/glusterd.vol file and adding
the following line between 'volume management' and 'end-volume':
   option rpc-auth-allow-insecure on
Is there some mechanism for setting glusterd.vol options without manually
editing a file on each host in the cluster?
If I add a new host to the cluster at a later point, will it slurp the
glusterd.vol file from one of the already established hosts? Or do I have
to manage keeping this file identical on every host?
-K
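For completeness, after that edit the management volume block would look roughly like the sketch below; everything except the added last option is the stock file as I understand it and may differ between versions:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option rpc-auth-allow-insecure on
    end-volume

As far as I know, glusterd.vol is plain per-host configuration that gluster does not replicate, so it has to be kept in sync manually on each server.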
------------------------------
Message: 20
Date: Wed, 18 Dec 2013 14:00:59 -0800 (PST)
From: William Kwan <potatok at yahoo.com>
To: "gluster-users at gluster.org" <gluster-users at
gluster.org>
Subject: [Gluster-users] failed to create volume ends with a prefix of
? ? ? ? ? ? ? ? it is ? ? ? ? ? ? ? ? already part of a volume
Message-ID:
? ? ? ? ? ? ? ? <1387404059.18772.YahooMailNeo at
web140403.mail.bf1.yahoo.com>
Content-Type: text/plain; charset="iso-8859-1"
Hi all,
Env: CentOS 6.5 with glusterfs 3.4.1
I just started working on Gluster. I have two test hosts. Both of them have
XFS on top of LVM. I searched, and there are lots of results like this, but
I'm not sure if this is a bug in my version?
# gluster volume create gvol1 replica 2 transport tcp ghost1:/data
ghost2:/data
volume create: gvol1: failed
# gluster volume list all
No volumes present in cluster
# gluster volume create gvol1 replica 2 transport tcp ghost1:/data
ghost2:/data
volume create: gvol1: failed: /data or a prefix of it is already part of a
volume
Thanks
Will
------------------------------
Message: 21
Date: Thu, 19 Dec 2013 07:38:35 +0530
From: Kaushal M <kshlmster at gmail.com>
To: Knut Moe <kmoe66 at gmail.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Trying to start glusterd
Message-ID:
<CAOujamU6qu5x6vdXSrOVsdV36xnVJgp+po+tmoJh00Fk9VM4g at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
All gluster processes need to be run as root. So you will need to use
'sudo' to run any command.
On Ubuntu, the correct way to start Glusterd is by using 'service glusterd
start'. You can check if glusterd is running using 'service glusterd
status'.
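Concretely, on Ubuntu (these are the standard service commands Kaushal refers to):

    sudo service glusterd start
    sudo service glusterd status
    # if startup still fails, the glusterd log usually says why:
    sudo tail /var/log/glusterfs/etc-glusterfs-glusterd.vol.log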
------------------------------
Message: 22
Date: Thu, 19 Dec 2013 12:34:59 +0530
From: Vijay Bellur <vbellur at redhat.com>
To: Joe Topjian <joe at topjian.net>
Cc: gluster-users at gluster.org, Bharata B Rao <bharata at linux.vnet.ibm.com>
Subject: Re: [Gluster-users] qemu remote insecure connections
Message-ID: <52B29A9B.3040503 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 12/16/2013 08:42 AM, Joe Topjian wrote:
> Hello,
>
> I apologize for the delayed reply.
>
> I've collected some logs and posted them here:
> https://gist.github.com/jtopjian/7981763
>
> I stopped the Gluster service on 192.168.1.11, moved /var/log/glusterfs
> to a backup, then started Gluster so that the log files were more
> succinct.
>
> I then used the qemu-img command as mentioned before as root, which was
> successful. Then I ran the command as libvirt-qemu and let the command
> hang for 2 minutes before I killed it.
I did not notice anything in the logs which refer to failures from a
gluster perspective. This is observed in the log file:
"[2013-12-16 02:58:16.078774] I
[client-handshake.c:1456:client_setvolume_cbk] 0-instances-client-1:
Connected to 192.168.1.12:49152, attached to remote volume
'/gluster/instances'."
It does look like a connection has been established but qemu-img is
blocked on something. Can you please start qemu-img with strace -f and
capture the output?
Bharata: Any additional things that we could try here?
Thanks,
Vijay
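A concrete form of that capture might be the following; the image URI and output path are my assumptions, not from the thread:

    # run as the user for which qemu-img hangs
    sudo -u libvirt-qemu strace -f -o /tmp/qemu-img.strace \
        qemu-img info gluster://192.168.1.11/instances/test.img
    # the last syscalls before the hang are the interesting part
    tail -50 /tmp/qemu-img.strace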
------------------------------
Message: 23
Date: Thu, 19 Dec 2013 08:55:39 +0000
From: "Bernhard Glomm" <bernhard.glomm at ecologic.eu>
To: potatok at yahoo.com, gluster-users at gluster.org
Subject: Re: [Gluster-users] failed to create volume ends with a prefix of
        it is already part of a volume
Message-ID: <0c18df4ee6aa911fd20cfe3ed5ab2ad2d4e3384c at ecologic.eu>
Content-Type: text/plain; charset="utf-8"
Hi Will,
I had similar issues. Did you see
http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ ?
Run:
setfattr -x trusted.glusterfs.volume-id $brick_path
setfattr -x trusted.gfid $brick_path
rm -rf $brick_path/.glusterfs
on BOTH/ALL sides of your mirror, then run
gluster peer probe <partnerhost>
on BOTH/ALL sides of your mirror, and only then run
gluster volume create ....
hth
Bernhard
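Put together for Will's two hosts (brick path /data from his commands; run the xattr cleanup on both ghost1 and ghost2, and only on bricks you intend to wipe):

    brick_path=/data
    setfattr -x trusted.glusterfs.volume-id $brick_path
    setfattr -x trusted.gfid $brick_path
    rm -rf $brick_path/.glusterfs
    # then, from ghost1 (and the reverse from ghost2):
    gluster peer probe ghost2
    gluster volume create gvol1 replica 2 transport tcp ghost1:/data ghost2:/data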
--
Bernhard Glomm
IT Administration
Phone: +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype: bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
------------------------------
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
End of Gluster-users Digest, Vol 68, Issue 20
*********************************************
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users