Hi,

When is self-healing triggered? As you can see below, it has been triggered; however, I checked the logs and there was no disconnection from the FTP servers, so I can't understand why it was triggered. Client-7 comes online, so could the images differ because some file got corrupted? Or was the FTP server for some reason unable to write to one of the replicated storages (client-6 and client-7)?

[2012-05-22 17:06:06.133382] I [client-handshake.c:863:client_setvolume_cbk] 0-client-7: Connected to 194.14.241.42:10001, attached to remote volume 'brick1'.
[2012-05-22 17:06:06.133410] I [afr-common.c:2552:afr_notify] 0-replicate-3: Subvolume 'client-7' came back up; going online.
[2012-05-22 17:06:06.138600] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse: switched graph to 0
[2012-05-22 17:06:06.138805] I [fuse-bridge.c:2897:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.16
[2012-05-22 17:06:06.139359] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-0: added root inode
[2012-05-22 17:06:06.140799] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-1: added root inode
[2012-05-22 17:06:06.140841] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-2: added root inode
[2012-05-22 17:06:06.141267] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-3: added root inode
[2012-05-22 17:06:06.151597] I [client-handshake.c:863:client_setvolume_cbk] 0-client-6: Connected to 194.14.242.42:10001, attached to remote volume 'brick1'.
[2012-05-22 17:06:06.151715] I [client-handshake.c:863:client_setvolume_cbk] 0-client-4: Connected to 194.14.242.42:10000, attached to remote volume 'brick0'.
[2012-05-22 17:21:30.895793] I [afr-common.c:716:afr_lookup_done] 0-replicate-3: background entry self-heal triggered. path: /02720-store0/2012-05-21
[2012-05-22 17:21:30.914237] I [afr-self-heal-common.c:1527:afr_self_heal_completion_cbk] 0-replicate-3: background entry self-heal completed on /02720-store0/2012-05-21

Regards
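A note on what that afr_lookup_done line implies: in this release, AFR decides whether an entry needs self-heal at lookup time, by comparing the trusted.afr.* changelog extended attributes that each replica keeps for its peer. A non-zero pending counter on either brick is enough to trigger a background heal, with no visible disconnect in the client logs required. A minimal way to inspect those attributes directly on a storage server is sketched below; the brick export path is only an assumed example, so substitute your real one:

    # run on the storage server, against the brick's local copy of the directory
    getfattr -d -m trusted.afr -e hex /export/brick1/02720-store0/2012-05-21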
On Tue, May 29, 2012 at 8:22 AM, Flavio Oliveira <nvezes at live.com> wrote:
> Hi,
>
> I am using GlusterFS 3.1 (Linux x86_64); you can see the volume
> configuration below.

I still don't know exactly how your servers and clients are set up and connecting to each other. What is the role of the FTP servers in your setup? As client-7 comes online, it is only natural that there might be pending changes waiting to be written to it. Please describe in detail what you are trying to accomplish.

Regards,
Rodrigo

> volume client-6
>   type protocol/client
>   option remote-host 194.14.242.20
>   option remote-subvolume brick1
>   option transport-type tcp
>   option ping-timeout 5
> end-volume
>
> volume client-7
>   type protocol/client
>   option remote-host 194.14.241.20
>   option remote-subvolume brick1
>   option transport-type tcp
>   option ping-timeout 5
> end-volume
>
> volume replicate-3
>   type cluster/replicate
>   option read-subvolume client-6
>   subvolumes client-6 client-7
> end-volume
>
> volume write-behind
>   type performance/write-behind
>   subvolumes dht
> end-volume
>
> volume read-ahead
>   type performance/read-ahead
>   subvolumes write-behind
> end-volume
>
> volume io-cache
>   type performance/io-cache
>   subvolumes read-ahead
> end-volume
>
> volume quick-read
>   type performance/quick-read
>   subvolumes io-cache
> end-volume
>
> volume io-stats
>   type debug/io-stats
>   subvolumes quick-read
> end-volume
>
> volume read-only
>   type features/read-only
>   subvolumes io-stats
> end-volume
>
> Regards
>
> ------------------------------
> Date: Tue, 29 May 2012 07:20:16 -0300
> Subject: Re: [Gluster-users] When self-healing is triggered?
> From: rodrigo at fabricadeideias.com
> To: nvezes at live.com
>
> On Tue, May 29, 2012 at 5:40 AM, Flavio Pessoa <nvezes at live.com> wrote:
> > Hi,
> >
> > When is self-healing triggered? As you can see below, it has been
> > triggered; however, I checked the logs and there was no disconnection
> > from the FTP servers, so I can't understand why it was triggered.
> > Client-7 comes online, so could the images differ because some file got
> > corrupted? Or was the FTP server for some reason unable to write to one
> > of the replicated storages (client-6 and client-7)?
>
> Could you please describe your network setup: servers/bricks/clients?
>
> Rodrigo
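Given the translator names in that volfile, the pending-operation counters that replicate-3 consults are stored in extended attributes named after the client subvolumes, i.e. trusted.afr.client-6 and trusted.afr.client-7. A sketch of checking one of them on a brick follows; the file path is an assumed example, and the all-zero value shown is what a fully healed file should report:

    # on the brick behind client-6 (194.14.242.20)
    getfattr -n trusted.afr.client-7 -e hex /export/brick1/some/file
    # trusted.afr.client-7=0x000000000000000000000000
    # three 4-byte counters (data, metadata, entry); non-zero means pending ops for the peer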
On Tue, May 29, 2012 at 11:05 AM, Flavio Oliveira <nvezes at live.com> wrote:
> Hi,
>
> I am new to the Gluster world, but I have a problem on my hands.
>
> Actually, since the client was disconnected and re-connected to the
> server, that may be reason enough to trigger the self-healing; that was
> my first thought. The servers may not be synchronized (timeouts, pending
> actions, etc.). However, a more experienced co-worker told me that it was
> not reason enough. Anyway, what I am really investigating is the timeout
> message below, which might be connected with the self-healing. Maybe
> something went wrong during the self-healing (file system corrupted?).
> So, that's why I need to understand when self-healing is triggered.
>
> [2012-05-22 21:57:09.627220] E [rpc-clnt.c:199:call_bail] 0-client-6:
> bailing out frame type(GlusterFS 3.1) op(FINODELK(30)) xid = 0x358492x
> sent = 2012-05-22 21:27:00.394792. timeout = 1800
>
> Basically, we have two FTP servers that upload data to the storages, and
> GlusterFS writes the files synchronously to all replicas, so the files
> become available for on-demand streaming.
>
> Customer -> FTP Server -> Storage <-> Gluster Client <-> Viewer

It's getting clearer. Focusing on the Gluster infrastructure: there are Gluster servers and Gluster clients. Where are they in the above schema? Are the storage servers the Gluster servers? You do know that you must not change anything directly on a brick, only through the Gluster native client, Gluster NFS, or Gluster CIFS?

Regards,
Rodrigo
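On the call_bail message itself: it means the client gave up waiting for a reply to an FINODELK (inode lock) request after the RPC frame timeout of 1800 seconds, which matches the protocol/client default. Self-heal takes exactly this kind of lock, so a heal stalled behind an unresponsive brick can plausibly produce this bail-out. If you want to experiment, the timeout is set per client stanza; a sketch based on the volfile quoted earlier, with the value shown being the default:

    volume client-6
      type protocol/client
      option remote-host 194.14.242.20
      option remote-subvolume brick1
      option transport-type tcp
      option ping-timeout 5
      option frame-timeout 1800   # seconds before call_bail abandons a pending RPC
    end-volume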