similar to: Clear heal statistics

Displaying 20 results from an estimated 10000 matches similar to: "Clear heal statistics"

2018 Jan 07
1
Clear heal statistics
Is there any way to clear the historic statistics from the command "gluster volume heal <volume_name> statistics"? It seems the command takes longer and longer to run each time it is used, to the point where it times out and no longer works.
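A minimal sketch of the commands involved, assuming a recent 3.x gluster and a placeholder volume name; as far as I can tell there is no sub-command that clears the accumulated crawl statistics, but the heal-count variant is usually much cheaper than the full statistics crawl:

    # full crawl statistics (the slow, accumulating output described above)
    gluster volume heal <volume_name> statistics

    # lighter-weight alternative: only count entries currently pending heal
    gluster volume heal <volume_name> statistics heal-count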
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick, If one of the self-heal processes is down, will the statistics heal-count command work? On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > 1) one peer, out of four, got separated from the network, from the rest of > the cluster. > 2) that peer (while it was unavailable) got detached with the > "gluster peer detach" command
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and gluster peer status. On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi all > > this: > $ vol heal $_vol info > outputs ok and exit code is 0 > But if I want to see statistics: > $ gluster vol heal $_vol statistics > Gathering crawl statistics on volume GROUP-WORK
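A quick sketch of the diagnostics being requested here, with a placeholder volume name:

    gluster volume info <volname>      # volume layout and options
    gluster volume status <volname>    # which brick and self-heal daemon processes are online
    gluster peer status                # state of the trusted pool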
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network, from the rest of the cluster. 2) that peer (while it was unavailable) got detached with the "gluster peer detach" command, which succeeded, so now the cluster comprises three peers. 3) The self-heal daemon (for some reason) does not start (even with an attempt to restart glusterd) on the peer which probed that fourth peer. 4) fourth
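A hedged sketch of one common way to get a missing self-heal daemon back, assuming a systemd-based distribution and a placeholder volume name; "volume start ... force" respawns brick and self-heal daemon processes that are not running without touching the ones that are:

    systemctl restart glusterd              # on the peer where the daemon is missing
    gluster volume start <volname> force    # respawn missing brick/shd processes
    gluster volume status <volname>         # check that "Self-heal Daemon" shows Online: Y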
2018 Feb 08
2
Thousands of EPOLLERR - disconnecting now
Hello, I have a large cluster in which every node is logging: I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now at a rate of around 4 or 5 per second per node, which is adding up to a lot of messages. This seems to happen while my cluster is idle.
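A rough sketch for sizing up and quieting the noise; the log paths are typical defaults and the volume name is a placeholder. The message is logged at Info ("I") level, so if it is coming from client or brick logs, raising those log levels to WARNING should suppress it:

    # how many of these messages does each log hold?
    grep -c 'EPOLLERR - disconnecting now' /var/log/glusterfs/*.log

    # per-volume log levels (existing gluster volume options):
    gluster volume set <volname> diagnostics.client-log-level WARNING
    gluster volume set <volname> diagnostics.brick-log-level WARNING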
2018 Feb 08
0
Thousands of EPOLLERR - disconnecting now
On Thu, Feb 8, 2018 at 2:04 PM, Gino Lisignoli <glisignoli at gmail.com> wrote: > Hello > > I have a large cluster in which every node is logging: > > I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - > disconnecting now > > At a rate of around 4 or 5 per second per node, which is adding up to a > lot of messages. This seems to happen while my
2017 Sep 04
2
heal info OK but statistics not working
hi all, this: $ vol heal $_vol info outputs ok and the exit code is 0. But if I want to see statistics: $ gluster vol heal $_vol statistics Gathering crawl statistics on volume GROUP-WORK has been unsuccessful on bricks that are down. Please check if all brick processes are running. I suspect gluster's inability to cope with a situation where one peer (which is not even a brick for a single vol on
2017 Nov 21
1
Brick and Subvolume Info
Hello, I have a Distributed-Replicate volume and I would like to know if it is possible to see which sub-volume a brick belongs to, e.g.: A Distributed-Replicate volume containing: Number of Bricks: 2 x 2 = 4 Brick1: node1.localdomain:/mnt/data1/brick1 Brick2: node2.localdomain:/mnt/data1/brick1 Brick3: node1.localdomain:/mnt/data2/brick2 Brick4: node2.localdomain:/mnt/data2/brick2 Is it possible
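For what it's worth, in a 2 x 2 Distributed-Replicate volume the bricks are grouped into replica subvolumes in the order "gluster volume info" lists them, replica-count bricks at a time, so for the layout above the grouping would be:

    replica subvolume 0: Brick1 node1.localdomain:/mnt/data1/brick1, Brick2 node2.localdomain:/mnt/data1/brick1
    replica subvolume 1: Brick3 node1.localdomain:/mnt/data2/brick2, Brick4 node2.localdomain:/mnt/data2/brick2

The generated client volfiles under /var/lib/glusterd/vols/<volname>/ spell this grouping out explicitly (the exact file name varies by transport and version).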
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume on two servers. The servers are: blade6 and blade7. (Another peer, blade1, is in the pool but with no volumes.) The volume seems ok, but I cannot mount it from NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1 [root@blade7 stor1]# df -h /dev/mapper/gluster_fast
2012 Nov 26
1
Heal not working
Hi, I have a volume created of 12 bricks with 3x replication (no stripe). We had to take one server down for maintenance (2 bricks per server, but configured so that the first brick from every server comes first, then the second brick from every server, so no server should appear more than once in any replica group). The server was down for 40 minutes, and after it came up I saw that gluster volume heal home0
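A minimal sketch of the heal commands usually involved after maintenance like this, using the volume name from the post:

    gluster volume heal home0 info    # entries still pending heal
    gluster volume heal home0         # trigger an index heal
    gluster volume heal home0 full    # force a full crawl if the index heal misses entries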
2017 Sep 21
0
Heal Info Shows Split Brain, but "file not in split brain" when attempted heal
Hello, I am using GlusterFS 3.10.5 on CentOS 7, with a distributed-replicated volume and a dist-rep hot tier. During data migration, we noticed the tierd.log on one of the nodes was huge. Upon review it seemed to be stuck on a certain set of files. Running "gluster vol heal VOL info" showed that those same files, which caused problems in the tier, were in split brain. So we went to fix split
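A hedged sketch of the CLI-based split-brain resolution available in this version range (3.7 and later); the file path is a placeholder, and behaviour on tiered volumes may differ:

    gluster volume heal VOL info split-brain
    # choose a resolution policy per file; bigger-file, latest-mtime and
    # source-brick are the documented choices:
    gluster volume heal VOL split-brain latest-mtime <path-within-volume>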
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while! I have the following stats: 4085169 files in both bricks; 3162940 files only have a single hard link. All of the files exist on both servers. bmidata2 (below) WAS running when bmidata1 died. gluster volume heal clifford statistics heal-count: Gathering count of entries to be healed on volume clifford has been successful. Brick bmidata1:/data/glusterfs/clifford/brick/brick Number of
2014 Sep 05
2
glusterfs replica volume self heal dir very slow!!why?
Hi all, I do the following test: I create a glusterfs replica volume (replica count is 2) with two server nodes (server A and server B), then mount the volume on a client node. Then I shut down the network of the server A node. On the client node, I copy a dir which has a lot of small files; the dir size is 2.9 GByte. When the copy finishes, I start the network of the server A node. Now
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey, did the heal complete and you still have some entries pending heal? If yes, then can you provide the following information to debug the issue: 1. Which version of gluster you are running 2. gluster volume heal <volname> info summary or gluster volume heal <volname> info 3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the files which is pending heal, from all
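A hypothetical illustration of item 3, run against one brick's copy of a file that is pending heal; the brick path is made up:

    getfattr -d -e hex -m . /bricks/brick1/path/to/pending-file
    # the trusted.afr.<volname>-client-N attributes encode pending
    # data/metadata/entry heal counters, and trusted.gfid is the file's gfid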
2017 Sep 17
0
Volume Heal issue
I am using gluster 3.8.12, the default on CentOS 7.3 (I will update to 3.10 at some point). On Sun, Sep 17, 2017 at 11:30 AM, Alex K <rightkicktech at gmail.com> wrote: > Hi all, > > I have a replica 3 with 1 arbiter. > > I see the last days that one file at a volume is always showing as needing > healing: > > gluster volume heal vms info > Brick
2012 Oct 10
1
Clearing the heal-failed and split-brain status messages
Hello, is it possible to clear the heal-failed and split-brain status in a nice way? I would personally like it if gluster automatically removed failed states when they are resolved (if future reference is needed you can always look at the logs). I'm asking because I'd like to monitor these for issues. The monitoring script would be trivial to set up if the failed status is / can be
2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files; any tips for finding them would be appreciated, but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10. [root@tpc-cent-glus1-081017 ~]#
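One hedged way to map such gfid entries back to a path on a brick, using a made-up gfid and brick path: on the brick, .glusterfs/<first two hex chars>/<next two>/<gfid> is a hard link to the real file (for regular files), so the path can be recovered by inode:

    GFID=01234567-89ab-cdef-0123-456789abcdef
    BRICK=/bricks/brick1
    ls -l "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path '*/.glusterfs/*'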
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume type first? Cheers, Laura B On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi Anatoliy, > > The heal command is basically used to heal any mismatching contents > between replica copies of the files. > For the command "gluster volume heal <volname>"
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik, thanks a lot for the explanation. Does it mean a distributed volume's health can be checked only by the "gluster volume status" command? And one more question: cluster.min-free-disk is 10% by default. What kind of "side effects" can we face if this option is reduced to, for example, 5%? Could you point to any best practice document(s)? Regards, Anatoliy
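A small sketch of checking and changing that option ("volume get" needs gluster 3.8 or newer; the 5% value is just an example):

    gluster volume get <volname> cluster.min-free-disk
    gluster volume set <volname> cluster.min-free-disk 5%

As far as I understand, the option only influences where DHT places new files once a brick crosses the threshold; it does not move existing data.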
2017 Sep 17
2
Volume Heal issue
Hi all, I have a replica 3 with 1 arbiter. For the last few days I have seen that one file on a volume is always showing as needing healing: gluster volume heal vms info Brick gluster0:/gluster/vms/brick Status: Connected Number of entries: 0 Brick gluster1:/gluster/vms/brick Status: Connected Number of entries: 0 Brick gluster2:/gluster/vms/brick <gfid:66d3468e-00cf-44dc-a835-7624da0c5370> Status: