similar to: Fwd: Troubleshooting glusterfs

Displaying 20 results from an estimated 2000 matches similar to: "Fwd: Troubleshooting glusterfs"

2018 Feb 05
0
Fwd: Troubleshooting glusterfs
On 5 February 2018 at 15:40, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > > I see a lot of the following messages in the logs: > [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] > 0-glusterfs: No change in volfile,continuing > [2018-02-04 07:41:16.189349] W [MSGID: 109011] > [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you so much, I think we are close to building a stable storage solution based on your recommendations. Here's our rebalance log - please don't pay attention to the error messages after 9 AM - that is when we manually destroyed the volume to recreate it for further testing. Also, all remove-brick operations you can see in the log were executed manually when recreating the volume.
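As a side note, a minimal sketch of the commands typically used to follow a rebalance on a volume like gv0 (the volume name comes from the neighbouring posts in this thread; the log file name is an assumption and may differ by version):

# show per-node progress and file/failure counts for the running rebalance
gluster volume rebalance gv0 status

# the rebalance log itself normally lives under /var/log/glusterfs/
less /var/log/glusterfs/gv0-rebalance.log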
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help in figuring this out! We changed our configuration and, after a successful test yesterday, we have run into a new issue today. The test, which included moderate read/write (~20-30 MB/s) and scaling of the storage, had been running for about 3 hours when the system got stuck. At the user level, errors like the following appear when trying to work with the filesystem: OSError:
2018 Feb 04
1
Fwd: Troubleshooting glusterfs
Please help troubleshoot glusterfs with the following setup: distributed volume without replication, sharding enabled. # cat /etc/centos-release CentOS release 6.9 (Final) # glusterfs --version glusterfs 3.12.3 [root at master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info Volume Name: gv0 Type: Distribute Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925 Status:
2018 Feb 04
1
Troubleshooting glusterfs
Please help troubleshoot glusterfs with the following setup: distributed volume without replication, sharding enabled. [root at master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info Volume Name: gv0 Type: Distribute Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925 Status: Started Snapshot Count: 0 Number of Bricks: 27 Transport-type: tcp Bricks: Brick1:
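For reference, a small sketch of CLI checks that go with a distributed, sharded volume like the gv0 described above (commands only, output omitted):

# full layout and reconfigured options
gluster volume info gv0

# confirm sharding is on and check the shard block size
gluster volume get gv0 features.shard
gluster volume get gv0 features.shard-block-size

# per-brick process and port status
gluster volume status gv0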
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64 client and an x86 client. Weirdly the client logs were almost identical. Here's the ppc64 gluster client log of attempting to create a folder... ------------- [2017-09-20 13:34:23.344321] D [rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
2018 Jan 16
2
Strange messages in mnt-xxx.log
Hi, I'm testing gluster 3.12.4 and, by inspecting the log file /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines saying: [2018-01-15 09:45:41.066914] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0 [2018-01-15 09:45:45.755021] I [MSGID: 109063]
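When dht_layout_normalize reports holes like this after bricks have been added or removed, one commonly suggested step (a sketch only, not a confirmed fix for this particular report) is a fix-layout rebalance:

# recompute directory layouts across all bricks without migrating data
gluster volume rebalance gv0 fix-layout start

# check progress afterwards
gluster volume rebalance gv0 status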
2018 Jan 17
0
Strange messages in mnt-xxx.log
Hi, On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at trendservizi.it> wrote: > Hi, > > I'm testing gluster 3.12.4 and, by inspecting log files > /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines > saying: > > [2018-01-15 09:45:41.066914] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize]
2018 May 21
2
split brain? but where?
Hi, I seem to have a split-brain issue, but I cannot figure out where it is and what it is; can someone help me please? I can't find what to fix here. ========== root at salt-001:~# salt gluster* cmd.run 'df -h' glusterp2.graywitch.co.nz: Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root
2018 May 10
2
broken gluster config
Whatever repair was happening has now finished, but I still have this, and I can't find anything so far telling me how to fix it. Looking at http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/ I can't determine what (a file? a directory? gv0 itself?) is actually the issue. [root at glusterp1 gv0]# gluster volume heal gv0 info split-brain Brick
2018 May 10
0
broken gluster config
Trying to read this, I can't understand what is wrong. [root at glusterp1 gv0]# gluster volume heal gv0 info Brick glusterp1:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1 Brick glusterp2:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1
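For context, a sketch of the CLI-based split-brain resolution commands documented for this gluster generation; the file path is a placeholder, since the output above only shows a gfid:

# keep the copy with the latest modification time
gluster volume heal gv0 split-brain latest-mtime <path-inside-volume>

# or explicitly pick one brick as the source
gluster volume heal gv0 split-brain source-brick glusterp1:/bricks/brick1/gv0 <path-inside-volume>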
2017 Nov 06
1
Re: glusterfs segmentation fault in rdma mode
Hi all, We found a strange problem. Some clients worked normally while some clients couldn't access special files. For example, Client A couldn't create the directory xxx, but Client B could. However, if Client B created the directory, Client A could access it and even delete it. But Client A still couldn't create the same directory later. If I changed the directory name, Client A
2018 May 22
2
split brain? but where?
Hi, Which version of gluster are you using? You can find which file that is using the following command: find <brickpath> -samefile <brickpath>/.glusterfs/<first two chars of gfid>/<next two chars of gfid>/<full gfid> Please provide the getfattr output of the file which is in split-brain. The steps to recover from split-brain can be found here,
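A sketch of that lookup using the brick path and gfid that appear in the neighbouring messages, with getfattr in its usual hex-dump form:

# map the gfid back to a real path on the brick (hard-link lookup)
find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

# dump all extended attributes of whatever path the find returns
getfattr -d -m . -e hex /bricks/brick1/gv0/<path-returned-by-find>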
2018 Jan 17
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
Here's the volume info: Volume Name: gv2a2 Type: Replicate Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: gluster1:/bricks/brick2/gv2a2 Brick2: gluster3:/bricks/brick3/gv2a2 Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter) Options Reconfigured: storage.owner-gid: 107
2018 Jan 23
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at trendservizi.it> wrote: > Here's the volume info: > > > Volume Name: gv2a2 > Type: Replicate > Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x (2 + 1) = 3 > Transport-type: tcp > Bricks: > Brick1:
2018 May 21
0
split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is? https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/ On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote: >Hi, > >I seem to have a split brain issue, but I cannot figure out where this >is >and what it is, can someone help me pls, I cant find what to fix here. >
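The page linked above describes resolving a gfid via an auxiliary gfid mount; a rough sketch, assuming the volume is gv0 served from glusterp1 and with an illustrative mount point:

# mount the volume with gfid access enabled
mount -t glusterfs -o aux-gfid-mount glusterp1:/gv0 /mnt/gv0

# ask gluster for the path(s) behind the gfid
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gv0/.gfid/eafb8799-4e7a-4264-9213-26997c5a4693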
2018 May 22
0
split brain? but where?
I tried this already. 8><--- [root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693 /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693 [root at glusterp2 fb]# 8><--- gluster 4 Centos 7.4 8><--- df -h [root at glusterp2 fb]# df -h Filesystem
2018 May 22
1
split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up, 8><--- [root at glusterp2 fb]# pwd /bricks/brick1/gv0/.glusterfs/ea/fb [root at glusterp2 fb]# ls -al total 3130892 drwx------. 2 root root 64 May 22 13:01 . drwx------. 4 root root 24 May 8 14:27 .. -rw-------. 1 root root 3294887936 May 4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693 -rw-r--r--. 1 root
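One way to search the brick for a regular file sharing the same inode (or, failing that, the same size) as that gfid file; a sketch based on the listing above:

# the gfid file and the real file are hard links, so they share an inode
ls -i /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
find /bricks/brick1/gv0 -inum <inode-number-from-above> -not -path "*/.glusterfs/*"

# fall back to searching by exact size
find /bricks/brick1/gv0 -type f -size 3294887936c -not -path "*/.glusterfs/*"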
2017 Nov 04
0
glusterfs segmentation fault in rdma mode
This looks like there could be some problem requesting / leaking / whatever memory, but without looking at the core it's tough to tell for sure. Note: /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618] Can you open up a bugzilla and get us the core file to review? -b ----- Original Message ----- > From: "???" <21291285 at qq.com> > To:
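A sketch of how such a core can be looked at locally before attaching it to the bug, assuming matching debuginfo packages are installed; the binary and core paths are placeholders and vary by distribution and by which gluster process crashed:

# open the core against the binary that produced it
gdb /usr/sbin/glusterfs /var/core/core.<pid>

# inside gdb: full backtrace of every thread
(gdb) thread apply all bt full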
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDRs and how they are used). Just glance at the logs of the client process where you saw the errors, which could give some hints. If you don't understand the logs, share them and we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
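A minimal sketch of pulling just the warning- and error-level lines out of a fuse client log for sharing; the log file name here is hypothetical and follows the mount point:

# gluster log lines carry a severity letter after the timestamp: E = error, W = warning
grep -E " (E|W) \[" /var/log/glusterfs/mnt-gv0.log | tail -n 100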