similar to: self-heal trouble after changing arbiter brick

Displaying 20 results from an estimated 700 matches similar to: "self-heal trouble after changing arbiter brick"

2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still ongoing, as /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (a version update is in progress). It doesn't have 'info summary' [yet?], and the heal info output is far too long to attach here. (It takes more than 20 minutes just to collect
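On 3.10 there is no "heal info summary" yet, but a rough count of pending entries can be pulled without the full listing. A minimal sketch, assuming the volume is named gv2 (substitute the actual volume name):

    gluster volume heal gv2 statistics heal-count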
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, that's a relief. Below is the getfattr output for a file from all the bricks:
root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
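To compare the replication metadata across the replica set, the same getfattr would be run against the file's path on every brick host. A minimal sketch, where gv0 and gv1 are hypothetical names for the other brick hosts and the brick paths are assumed to match the one above:

    for h in gv0 gv1 gv2; do
        ssh "$h" getfattr -d -e hex -m trusted.afr \
            /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
    done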
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey, Has the heal completed and do you still have some entries pending heal? If yes, then can you provide the following information to debug the issue:
1. Which version of gluster you are running
2. gluster volume heal <volname> info summary or gluster volume heal <volname> info
3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the files which is pending heal, from all
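Put together on one of the brick servers, gathering those three items would look roughly like this; a sketch, using gv2 as a stand-in volume name and the .pack file quoted elsewhere in this thread as the file pending heal:

    gluster --version
    gluster volume heal gv2 info summary    # or 'gluster volume heal gv2 info' on versions without summary support
    getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack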
2017 Jul 31
3
gluster volume 3.10.4 hangs
Hi folks, I'm running a simple gluster setup with a single volume replicated across two servers, as follows:
Volume Name: gv0
Type: Replicate
Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: sst0:/var/glusterfs
Brick2: sst2:/var/glusterfs
Options Reconfigured:
cluster.self-heal-daemon: enable
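For context, a volume with that layout would have been created along these lines; a sketch built from the names above, not the poster's actual commands:

    gluster volume create gv0 replica 2 sst0:/var/glusterfs sst2:/var/glusterfs
    gluster volume start gv0
    gluster volume set gv0 cluster.self-heal-daemon enable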
2015 Jul 21
6
[LLVMdev] GlobalsModRef (and thus LTO) is completely broken
Based on function names and structures, this is some version of GCC :) Any way you can post the entire .ll file? Because it's globalsmodref, it's hard to debug without the other functions, since it goes over all the functions to determine address takenness, etc :) On Tue, Jul 21, 2015 at 3:23 PM, Michael Zolotukhin <mzolotukhin at apple.com> wrote: > Hi Chandler, > > We
2015 Jul 17
2
[LLVMdev] GlobalsModRef (and thus LTO) is completely broken
On Fri, Jul 17, 2015 at 9:13 AM Evgeny Astigeevich <evgeny.astigeevich at arm.com> wrote: > It's Dhrystone. > Dhrystone has historically not been a good indicator of real-world performance fluctuations, especially at this small of a shift. I'd like to see if we see any fluctuation on larger and more realistic application benchmarks. One advantage of the flag being set is that we
2007 Jul 20
5
[LLVMdev] Seg faulting on vector ops
Hola LLVMers, I'm looking to make use of the vectorization primitives in the Intel chip with the code we generate from LLVM and so I've started experimenting with it. What is the state of the machine code generated for vectors? In my tinkering, I seem to be getting some wonky machine instructions, but I'm most likely just doing something wrong and I'm hoping you can set me in
2014 Dec 08
3
[LLVMdev] Incorrect loop optimization when building the Linux kernel
I was trying to build the Linux kernel with clang and observed a crash due to incorrect loop optimization:
drivers/base/firmware_class.c
extern struct builtin_fw __start_builtin_fw[];
extern struct builtin_fw __end_builtin_fw[];
static bool fw_get_builtin_firmware(struct firmware *fw, const char *name)
{
        struct builtin_fw *b_fw;
        for (b_fw = __start_builtin_fw; b_fw != __end_builtin_fw;
2017 Aug 31
2
Manually delete .glusterfs/changelogs directory ?
Hi Mabi, If you will not use that geo-replication volume session again, I believe it is safe to manually delete the files in the brick directory using rm -rf. However, the gluster documentation specifies that if the session is to be permanently deleted, this is the command to use: gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time
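Put together, tearing the session down and then clearing the leftover changelogs would look roughly like this; a sketch where the volume and slave names follow the example above and /path/to/brick is a placeholder for the actual brick directory:

    gluster volume geo-replication gv1 snode1::gv2 stop       # only if the session is still running
    gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time
    rm -rf /path/to/brick/.glusterfs/changelogs               # manual cleanup, only once the session is gone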
2007 Jul 21
0
[LLVMdev] Seg faulting on vector ops
On Fri, 20 Jul 2007, Chuck Rose III wrote: > I'm looking to make use of the vectorization primitives in the Intel > chip with the code we generate from LLVM and so I've started > experimenting with it. What is the state of the machine code generated > for vectors? In my tinkering, I seem to be getting some wonky machine > instructions, but I'm most likely just doing
2007 Jul 24
2
[LLVMdev] Seg faulting on vector ops
Hrm. This problem shouldn't be target specific. I am pretty sure the prologue / epilogue inserter aligns the stack correctly if there are stack objects with a greater-than-default stack alignment requirement. It seems the initial alloca() instruction should specify 16-byte alignment? Evan On Jul 21, 2007, at 2:51 PM, Chris Lattner wrote: > On Fri, 20 Jul 2007, Chuck Rose III wrote:
2015 Dec 03
3
Function attributes for LibFunc and its impact on GlobalsAA
----- Original Message ----- > From: "James Molloy via llvm-dev" <llvm-dev at lists.llvm.org> > To: "Vaivaswatha Nagaraj" <vn at compilertree.com> > Cc: "LLVM Dev" <llvm-dev at lists.llvm.org> > Sent: Thursday, December 3, 2015 4:41:46 AM > Subject: Re: [llvm-dev] Function attributes for LibFunc and its impact on GlobalsAA > >
2007 Jul 20
0
[LLVMdev] Seg faulting on vector ops
Hi Chuck! On Jul 20, 2007, at 11:36 AM, Chuck Rose III wrote: > Hola LLVMers, > > > > I'm looking to make use of the vectorization primitives in the > Intel chip with the code we generate from LLVM and so I've started > experimenting with it. What is the state of the machine code > generated for vectors? In my tinkering, I seem to be getting some > wonky
2007 Jul 26
0
[LLVMdev] Seg faulting on vector ops
I am fairly certain this is right. Chuck, can you do a quick experiment for me? Go back to your original code but make sure the alloca instruction specifies 16-byte alignment. The code should work. If not, please file a bug. Thanks, Evan On Jul 24, 2007, at 1:58 PM, Evan Cheng wrote: > Hrm. This problem shouldn't be target specific. I am pretty sure > prologue / epilogue inserter
2017 Aug 30
0
Manually delete .glusterfs/changelogs directory ?
Hi, has anyone any advice to give about my question below? Thanks! > -------- Original Message -------- > Subject: Manually delete .glusterfs/changelogs directory ? > Local Time: August 16, 2017 5:59 PM > UTC Time: August 16, 2017 3:59 PM > From: mabi at protonmail.ch > To: Gluster Users <gluster-users at gluster.org> > > Hello, > > I just deleted (permanently)
2017 Aug 16
2
Manually delete .glusterfs/changelogs directory ?
Hello, I just deleted (permanently) my geo-replication session using the following command: gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete and noticed that the .glusterfs/changelogs directory on my volume still exists. Is it safe to delete the whole directory myself with "rm -rf .glusterfs/changelogs" ? As far as I understand the CHANGELOG.* files are only needed
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it: https://bugzilla.redhat.com/show_bug.cgi?id=1491059 https://bugzilla.redhat.com/show_bug.cgi?id=1491060 Please see the above bugs for full detail. In summary, my issue was related to glusterd's handling of pid files when it starts self-heal and bricks. The issues are: a. brick pid file leaves stale pid and brick fails
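A quick way to check for a stale brick pid file is to compare the recorded pid against a live process; a sketch, where the pid file path is only an example since the actual location varies by gluster version, volume, and brick:

    PIDFILE=/var/run/gluster/vols/gv0/sst0-var-glusterfs.pid   # example path only, adjust for your volume/brick
    ps -p "$(cat "$PIDFILE")" -o pid,cmd || echo "stale pid file: $PIDFILE"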
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird, as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K     /data/glusterfs/gv1/brick1/brick/mytute
18M     /data/glusterfs/gv1/brick1/brick/.shard
0
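To see how much of that is shard data on each brick, the per-directory totals can be compared directly; a sketch assuming the same brick layout as above:

    du -sh /data/glusterfs/gv1/brick{1,2,3}/brick/.shard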
2023 Jul 03
1
remove_me files building up
Hi, you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick? Best Regards, Strahil Nikolov On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the server's
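For reference, checking the filesystem geometry and inode usage on an arbiter brick would look something like this; a sketch, assuming the brick path quoted elsewhere in this thread is the XFS mount point:

    xfs_info /data/glusterfs/gv1/brick1/brick
    df -i /data/glusterfs/gv1/brick1/brick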