Displaying 20 results from an estimated 100 matches similar to: "Trace log of unify when glusterfs freezes"

2008 Aug 24 · 2 replies · Unusual bug in glusterfsd
Hi, I'm rather new to this project, having stumbled across it earlier this afternoon, so forgive me if I'm still trying to find my way around. I was in need of an alternative to NFS that would let me spread the task of sharing my downloaded source code files across a couple of boxes, and GlusterFS looked like a great candidate, having had no luck with Coda or OpenAFS. I also want
2008 Aug 01 · 1 reply · file descriptor in bad state
I've just set up a simple gluster storage system on CentOS 5.2 x64 with gluster 1.3.10. I have three storage bricks and one client. Every time I run iozone across this setup, I seem to get a bad file descriptor around the 4k mark. Any thoughts why? I'm sure more info is wanted; I'm just not sure what else to include at this point. Thanks. [root at green gluster]# cat
2012 Feb 29 · 2 replies · peer probe fails
Hi, I am unable to do a peer probe, and unable to figure out what the reason is from the gluster log. Can someone help? 1) This is what I was trying:
gluster> peer probe llm19.in.ibm.com
Probe unsuccessful
Probe returned with unknown errno 107
gluster> peer probe 9.124.111.25
Probe unsuccessful
Probe returned with unknown errno 107
gluster> peer status
Number of Peers: 1
Hostname:
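
Errno 107 is ENOTCONN ("Transport endpoint is not connected"), which in this context usually means glusterd on the target host never answered. A minimal first check, assuming the default management port 24007 (the hostname is taken from the post above):

# on the node being probed
service glusterd status             # confirm the daemon is actually running
# from the probing node
telnet llm19.in.ibm.com 24007       # confirm the management port is reachable
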
2018 Apr 03 · 0 replies · Sharding problem - multiple shard copies with mismatching gfids
Raghavendra, Sorry for the late follow-up. I have some more data on the issue. The issue tends to happen when the shards are created. The easiest time to reproduce this is during an initial VM disk format. This is a log from a test VM that was launched, and then partitioned and formatted with LVM / XFS:
[2018-04-03 02:05:00.838440] W [MSGID: 109048]
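
A common way to confirm mismatching shard gfids is to compare the trusted.gfid xattr of the same shard file on each brick. A hedged sketch; the brick path and shard name are placeholders, not taken from the post:

# run on every brick that holds a copy of the shard
getfattr -n trusted.gfid -e hex /bricks/brick1/.shard/<base-gfid>.1
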
2018 Apr 06 · 1 reply · Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :). This looks to be a genuine issue which requires some effort to fix. Can you file a bug? I need the following information attached to the bug: * Client and brick logs. If you can reproduce the issue, please set diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If you cannot reproduce the issue or if you cannot accommodate such big logs, please set
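
The two options named above are set per volume. A sketch of the usual sequence, with VOLNAME as a placeholder; INFO is assumed to be the level to restore afterwards:

gluster volume set VOLNAME diagnostics.client-log-level TRACE
gluster volume set VOLNAME diagnostics.brick-log-level TRACE
# reproduce the problem and collect the logs, then dial back down
gluster volume set VOLNAME diagnostics.client-log-level INFO
gluster volume set VOLNAME diagnostics.brick-log-level INFO
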
2017 Dec 01 · 0 replies · Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a problem joining four Gluster 3.10 nodes to existing > Gluster 3.8 nodes. My understanding is that this should work and not be > too much of a problem. > > Peer probe is successful but the node is rejected: > > gluster> peer detach elkpinfglt07 > peer
2018 Mar 06 · 0 replies · Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com> wrote: > Hello, > > So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. > > It actually began as the same problem with a different peer. I noticed > it with (call it) gluster-2, when I couldn't make a new volume. I compared > /var/lib/glusterd between them, and
2008 Nov 04 · 1 reply · fuse_setlk_cbk error
I'm building a two-node cluster to run vserver systems on. I've set up glusterfs with this config:
# node a
volume data-posix
  type storage/posix
  option directory /export/cluster
end-volume
volume data1
  type features/posix-locks
  subvolumes data-posix
end-volume
volume data2
  type protocol/client
  option transport-type tcp/client
  option remote-host
2018 Mar 06 · 4 replies · Fixing a rejected peer
Hello, So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. It actually began as the same problem with a different peer. I noticed it with (call it) gluster-2, when I couldn't make a new volume. I compared /var/lib/glusterd between them, and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
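
For reference, the recovery the Gluster documentation commonly suggests for a rejected peer is roughly the following, run on the rejected node only (verify against the docs for your version before deleting any state):

systemctl stop glusterd
# in /var/lib/glusterd, remove everything except glusterd.info
systemctl start glusterd
gluster peer probe <a-healthy-node>
systemctl restart glusterd
gluster peer status
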
2017 Aug 30 · 0 replies · peer rejected but connected
Could you please send me the "info" file, which is placed in the "/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with glusterd.logs and command-history. Thanks Gaurav On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi fellas, > same old same > in log of the probing peer I see: > ... > 2017-08-29
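
What is being asked for here lives in fixed locations on each node. A sketch, with the volume name as a placeholder and default log paths assumed (file names vary slightly across versions):

# run on every node
cat /var/lib/glusterd/vols/<vol-name>/info
# glusterd log and the CLI command history
ls /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log
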
2017 Jul 07 · 0 replies · [Gluster-devel] gfid and volume-id extended attributes lost
Pranith, Thanks for looking into the issue. The bricks were mounted after the reboot. One more thing I noticed: when the attributes were set manually while glusterd was up, they were lost again on starting the volume. I had to stop glusterd, set the attributes, and then start glusterd. After that the volume start succeeded. Thanks and Regards, Ram From: Pranith
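
The attributes in question are trusted.gfid and trusted.glusterfs.volume-id on the brick root. A hedged sketch of inspecting and restoring the volume-id; the brick path is a placeholder and the UUID must be the volume's actual ID from gluster volume info:

getfattr -d -m . -e hex /bricks/brick1     # inspect what is currently set
# with glusterd stopped, as the post describes:
setfattr -n trusted.glusterfs.volume-id -v 0x<volume-uuid-as-hex> /bricks/brick1
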
2017 Aug 29 · 3 replies · peer rejected but connected
hi fellas, same old same in log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493] [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490] [glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid:
2017 Nov 30 · 2 replies · Problems joining new gluster 3.10 nodes to existing 3.8
Hi, I have a problem joining four Gluster 3.10 nodes to existing Gluster 3.8 nodes. My understanding is that this should work and not be too much of a problem. Peer probe is successful but the node is rejected:
gluster> peer detach elkpinfglt07
peer detach: success
gluster> peer probe elkpinfglt07
peer probe: success.
gluster> peer status
Number of Peers: 6
Hostname: elkpinfglt02
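
When mixing 3.8 and 3.10 nodes, one thing worth comparing before anything else is the operating version each glusterd has recorded; a mismatch there is a plausible cause of a probed-but-rejected peer. An assumed check:

# on each node; compare the operating-version lines
cat /var/lib/glusterd/glusterd.info
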
2011 Jan 09 · 1 reply · gluster peer probe
Hello everyone, So this is my first email here. Recently I downloaded glusterfs-3.1 and tried to install it on my four servers. I did not have any problems with the installation, but the configuration is a little bit unclear to me. I would like to ask whether you ran into the same problems I encountered, and if so, how you resolved them. My configuration: 5 servers
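
For a first glusterfs-3.1 setup, the minimal flow is a probe from one server followed by a volume create and start. A generic sketch, not tied to this poster's layout; hostnames, the volume name, and brick paths are placeholders:

gluster peer probe server2
gluster peer status
gluster volume create testvol replica 2 server1:/export/brick server2:/export/brick
gluster volume start testvol
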
2017 Sep 20 · 1 reply · "Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64 client and an x86 client. Weirdly the client logs were almost identical. Here's the ppc64 gluster client log of attempting to create a folder:
-------------
[2017-09-20 13:34:23.344321] D [rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
2017 Jul 07 · 0 replies · [Gluster-devel] gfid and volume-id extended attributes lost
3.7.19
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Friday, July 07, 2017 11:54 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at
2017 Jul 07 · 3 replies · [Gluster-devel] gfid and volume-id extended attributes lost
Did anything special happen on these two bricks? It can't happen in the I/O path: posix_removexattr() has:
if (!strcmp (GFID_XATTR_KEY, name)) {
        gf_msg (this->name, GF_LOG_WARNING, 0,
                P_MSG_XATTR_NOT_REMOVED,
                "Remove xattr called on gfid for file %s", real_path);
        op_ret = -1;
        goto
2017 Jul 07 · 2 replies · [Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > Pranith, > > Thanks for looking into the issue. The bricks were > mounted after the reboot. One more thing that I noticed was when the > attributes were manually set when glusterd was up then on starting the > volume the attributes were again lost. Had to stop glusterd
2018 Feb 06 · 0 replies · strange hostname issue on volume create command with famous Peer in Cluster state error message
Did you do gluster peer probe? Check out the documentation: http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/ On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote: > Hello, > > I installed glusterfs version 3.11.3 on 3 nodes, Ubuntu 16.04 machines. All > machines have the same /etc/hosts. > > node1 hostname > pri.ostechnix.lan
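
Name-resolution drift between nodes is the usual culprit behind this class of error. A quick sanity check, using the hostname from the post; every node should return the same answer:

getent hosts pri.ostechnix.lan
gluster peer status       # the hostnames shown should match what was probed
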
2017 Sep 01 · 2 replies · peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue. The info file on node 10.5.6.17 contains an additional property, "tier-enabled", which is not present in the info file on the other 3 nodes. When a gluster peer probe call is made, the cksum is compared in order to maintain consistency across the cluster. In this case the two files differ, leading to different cksums, causing state in
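
The RCA above can be verified directly, since both the info file and its stored checksum live under /var/lib/glusterd. A sketch, with the volume name as a placeholder:

# on each node; the outputs should be identical across the cluster
grep tier-enabled /var/lib/glusterd/vols/<vol-name>/info
cat /var/lib/glusterd/vols/<vol-name>/cksum
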