Displaying 20 results from an estimated 10000 matches similar to: "glusterfs replica volume self heal dir very slow!!why?"
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello,
I'm trying to build a replica volume on two servers.
The servers are blade6 and blade7 (blade1 is also in the peer list, but
hosts no volumes).
The volume seems OK, but I cannot mount it over NFS.
Here are some logs:
[root@blade6 stor1]# df -h
/dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1
[root@blade7 stor1]# df -h
/dev/mapper/gluster_fast
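For anyone hitting the same symptom, a first diagnostic pass usually looks like the sketch below (the volume name stor1 is assumed from the mount points shown; Gluster's built-in NFS server only speaks NFSv3, so the client must force version 3):

# Check peer and volume health first
gluster peer status
gluster volume status stor1

# List the entries the self-heal daemon considers split-brain
gluster volume heal stor1 info split-brain

# Gluster NFS is v3-only; force vers=3 and TCP on the client
mount -t nfs -o vers=3,proto=tcp blade6:/stor1 /mnt/stor1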
2013 Jun 17
1
Ability to change replica count on an active volume
Hi, all
As the title
I found in the official documentation that GlusterFS 3.3 has the ability
to change the replica count:
http://www.gluster.org/community/documentation/index.php/WhatsNew3.3
But I couldn't find any documentation on how to actually do it.
Has this feature been added already, or will be supported soon?
thanks.
Wang Li
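For the record, in 3.3 the replica count is changed by passing the new count to add-brick (one new brick per existing replica set). A minimal sketch, with a hypothetical volume myvol going from replica 2 to replica 3:

# Raise the count and supply one new brick per replica set
gluster volume add-brick myvol replica 3 server3:/export/brick1

# Then populate the new brick from the existing copies
gluster volume heal myvol full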
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running GlusterFS 3.3.1 on CentOS 6.4.
Gluster volume status:
Status of volume: glustervol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick 24009 Y 20031
Brick KWTOCUATGS002:/mnt/cloudbrick
2012 Nov 26
1
Heal not working
Hi,
I have a volume created from 12 bricks with 3x replication (no stripe). We had to take one server down for maintenance (2 bricks per server, but ordered so that the first brick from every server comes before the second brick from every server, so no server appears more than once in any replica group). The server was down for 40 minutes, and after it came back up I saw that gluster volume heal home0
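After an outage that short, the changelog-driven self-heal should normally catch up on its own; a hedged sketch of how to check and nudge it for the home0 volume named above:

# Entries still pending heal, listed per brick
gluster volume heal home0 info

# Trigger a full crawl if the pending list looks stuck
gluster volume heal home0 full

# Afterwards, confirm what actually got healed
gluster volume heal home0 info healed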
2013 Jul 09
2
Gluster Self Heal
Hi,
I have a 2-node gluster with 3 TB storage.
1) I believe "glusterfsd" is responsible for the self-healing between the 2 nodes.
2) Due to some network error, the replication stopped for some reason, but the application was still accessing the data from node1. When I manually try to start the "glusterfsd" service, it's not starting.
Please advise on how I can maintain
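One clarification that usually helps here: self-heal is performed by the glustershd helper daemon, not by the glusterfsd brick processes, and both are spawned by glusterd rather than started by hand. A sketch of the usual recovery (the volume name myvol is hypothetical):

# glusterd is the only service started directly; it spawns
# glusterfsd (bricks) and glustershd (self-heal daemon)
service glusterd restart

# Respawn any missing brick / self-heal processes
gluster volume start myvol force

# Verify that bricks and the self-heal daemon show Online
gluster volume status myvol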
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi,
I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used
by 4 clients.
Sometimes from some clients I can't access some of the files. After I force
a full heal on the brick I see several files healed. Is this behavior
normal?
Thanks
--
Paulo Silva <paulojjs at gmail.com>
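Some of this is expected with client-side replication: a client that briefly loses one brick keeps writing to the other, and those writes are queued for heal. A sketch for watching the backlog (the volume name is hypothetical):

# Files the changelog already knows need healing
gluster volume heal myvol info

# Full crawl: compares the bricks directly and also catches
# files the changelog missed
gluster volume heal myvol full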
2013 Apr 30
1
Volume heal daemon 3.4alpha3
gluster> volume heal dyn_coldfusion
Self-heal daemon is not running. Check self-heal daemon log file.
gluster>
Is there a specific log? When I check /var/log/glusterfs/glustershd.log:
glustershd.log:[2013-04-30 15:51:40.463259] E
[afr-self-heald.c:409:_crawl_proceed] 0-dyn_coldfusion-replicate-0:
Stopping crawl for dyn_coldfusion-client-1 , subvol went down
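When glustershd reports that a crawl stopped because a subvolume went down, the usual sequence is to bring the brick back and let glusterd respawn the daemons; a hedged sketch:

# Confirm which brick (subvolume) is offline
gluster volume status dyn_coldfusion

# Respawn missing brick processes and the self-heal daemon
gluster volume start dyn_coldfusion force

# Then retry the heal
gluster volume heal dyn_coldfusion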
2013 Sep 28
0
Gluster NFS Replicate bricks different size
I've mounted a gluster 1x2 replica through NFS in oVirt. The NFS share
holds the qcow images of the VMs.
I recently nuked a whole replica brick in a 1x2 array (for numerous other
reasons, including split-brain); the brick self-healed and restored back to
the same state as its partner.
4 days later, they've become unbalanced. A direct `du` of the bricks is
showing different sizes by
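Worth noting: a raw du on a brick also counts the .glusterfs metadata tree (gfid hard links, indices, landfill), which can legitimately differ between replicas. Comparing payload only is closer to apples-to-apples; a sketch (volume name hypothetical):

# Compare user data only, ignoring gluster's internal tree
du -sh --exclude=.glusterfs /brick

# And confirm nothing is actually pending heal
gluster volume heal myvol info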
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi,
I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to follow when turning off a server, in terms of services etc., or can I just shut the server down?
Thanks & Regards,
Bobby Jacob
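A commonly used pre-shutdown checklist for one node of a replica pair, sketched below (assumes clients can still reach the surviving node, e.g. via backupvolfile-server):

# Make sure the surviving replica is fully caught up:
# heal info should show 0 entries per brick
gluster volume heal myvol info

# Stop the management daemon on the node going down
service glusterd stop

# Brick and NFS processes are not children of glusterd,
# so stop them explicitly before powering off
pkill glusterfsd
pkill glusterfs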
2013 Oct 31
1
changing volume from Distributed-Replicate to Distributed
hi all,
as the title says - I'm looking to change a volume from dist/repl -> dist.
We're currently running 3.2.7. A few questions for you gurus out there:
- is this possible to do on 3.2.7?
- is this possible to do with 3.4.1? (would involve upgrade)
- are there any pitfalls i should be aware of?
many thanks in advance,
regards,
paul
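For reference: 3.2.7 has no supported path for this, while on 3.4.x the conversion is done by removing one brick per replica set and lowering the count in the same command. A hedged sketch against a hypothetical 2x2 dist/repl volume:

# Drop replica 2 -> 1: remove exactly one brick from each
# replica pair, stating the new count
gluster volume remove-brick myvol replica 1 \
    server2:/export/brick1 server2:/export/brick2 force

# Type should now read Distribute
gluster volume info myvol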
2013 Oct 06
0
Options to turn off/on for reliable virtual machinewrites & write performance
In a replicated cluster, the client writes to all replicas at the same time. This is likely why you are only getting half the speed for writes: the data is going to two servers and is therefore maxing out your gigabit network. That is, unless I am misunderstanding how you are measuring the 60MB/s write speed.
I don't have any advice on the other bits...sorry.
Todd
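The arithmetic behind that estimate, spelled out:

1 Gbit/s      ~ 125 MB/s of payload
replica 2     -> every write is sent to 2 servers over the same client NIC
125 MB/s / 2  ~ 60 MB/s observed per write stream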
2013 Oct 25
1
GlusterFS 3.4 Fuse client Performace
Dear GlusterFS Engineer,
I have questions about whether my glusterfs server and FUSE client are
performing properly on the specification below.
It can write only *65MB*/s through the FUSE client to 1 glusterfs server (1
brick and no replica for 1 volume).
- Network bandwidth is sufficient for now. I've checked it with iftop.
- However, it can write *120MB*/s when I mount the same volume over NFS.
Could anyone check if the
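One way to make the two numbers comparable is to include the flush in the timing, since the kernel NFS client caches writes much more aggressively than FUSE. A sketch (mount points hypothetical):

# Write 1 GB and time it including the final flush, so
# page-cache effects don't inflate the NFS figure
dd if=/dev/zero of=/mnt/fuse/testfile bs=1M count=1024 conv=fdatasync
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=fdatasync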
2013 Aug 23
1
Slow writing on mounted glusterfs volume via Samba
Hi guys,
I have configured GlusterFS in replication mode on two Ubuntu servers.
Windows users use Samba sharing to access the mounted volume. Basically, my
setup is that client machines at each site connect to their local file
server, so each has the fastest connection. The two file servers are
connected via a VPN tunnel which has really high bandwidth.
Right now it is very slow to write files to the
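Two knobs commonly tried for this pattern, sketched below; the volume options are standard ones and the values are only starting points:

# Let gluster absorb small Samba writes before shipping
# them across the VPN
gluster volume set myvol performance.write-behind on
gluster volume set myvol performance.cache-size 256MB

# In smb.conf, larger socket buffers can also help on
# high-latency links:
#   socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072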
2019 Nov 29
2
Healing completely loss file on replica 3 volume
I'm trying to manually write garbage data onto bricks (while the volume is
stopped) and then check whether healing is possible. For example:
Start:
# glusterd --debug
Bricks (on EXT4 mounted with 'rw,relatime'):
# mkdir /root/data0
# mkdir /root/data1
# mkdir /root/data2
Volume:
# gluster volume create gv0 replica 3 [local-ip]:/root/data0 [local-ip]:/root/data1 [local-ip]:/root/data2
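The experiment then presumably continues along these lines (the file name is hypothetical): start and mount the volume, write a file, damage one brick's copy while the volume is stopped, and see whether heal restores it:

gluster volume start gv0
mount -t glusterfs [local-ip]:/gv0 /mnt/gv0
echo "payload" > /mnt/gv0/file.txt

gluster volume stop gv0
# Damage one copy directly on the brick
echo "garbage" > /root/data0/file.txt
gluster volume start gv0

gluster volume heal gv0 full
gluster volume heal gv0 info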
2017 Jul 11
1
Replica 3 with arbiter - heal error?
Hello,
I have Gluster 3.8.13 with a replica 3 arbiter volume mounted, and I run
the following script there:
while true; do echo "$(date)" >> a.txt; sleep 2; done
After a few seconds I add a firewall rule on the client that blocks
access to the node specified during mount, e.g. if the volume is mounted
with:
mount -t glusterfs -o backupvolfile-server=10.0.0.2 10.0.0.1:/vol /mnt/vol
I
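The blocking rule in that test would look roughly like this sketch (IPs taken from the mount example above):

# On the client: cut traffic to the primary node only
iptables -A OUTPUT -d 10.0.0.1 -j DROP

# ... run the write loop, then remove the rule and
# watch the arbiter-backed heal
iptables -D OUTPUT -d 10.0.0.1 -j DROP
gluster volume heal vol info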
2013 Nov 29
1
Self heal problem
Hi,
I have a glusterfs volume replicated on three nodes. I am planning to use
the volume as storage for VMware ESXi machines using NFS. The reason for
using three nodes is to be able to configure quorum and avoid
split-brains. However, during my initial testing, when I intentionally and
gracefully restarted the node "ned", a split-brain/self-heal error
occurred.
The log on "todd"
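For a replica 3 setup, the quorum configuration being aimed at is typically this (volume name hypothetical):

# Client-side quorum: writes require a majority of replicas
gluster volume set myvol cluster.quorum-type auto

# Server-side quorum: bricks are stopped when glusterd
# peers fall below quorum, preventing split-brain writes
gluster volume set myvol cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%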
2013 May 10
2
Self-heal and high load
Hi all,
I'm pretty new to Gluster, and the company I work for uses it for storage
across 2 data centres. An issue has cropped up fairly recently with regards
to the self-heal mechanism.
Occasionally the connection between these 2 Gluster servers breaks or drops
momentarily. Due to the nature of the business it's highly likely that
files have been written during this time. When the
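Heal speed can usually be traded against load; a hedged sketch of the options most often mentioned for this situation (volume name hypothetical):

# Fewer concurrent background heals per client
gluster volume set myvol cluster.background-self-heal-count 4

# 'diff' ships only changed blocks instead of whole files,
# which matters a lot across a DC-to-DC link
gluster volume set myvol cluster.data-self-heal-algorithm diff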
2013 Nov 28
1
how to recover a accidentally delete brick directory?
hi all,
I accidentally removed the brick directory of a volume on one node; the
replica count for this volume is 2.
Now the situation is: there is no corresponding glusterfsd process on
this node, and 'gluster volume status' shows that the brick is offline,
like this:
Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513
Brick 192.168.64.12:/opt/gluster_data/eccp_glance
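The usual recovery is to recreate the directory, restore the trusted.glusterfs.volume-id xattr that glusterd checks at brick start, then force-start the volume and heal. A sketch, assuming the volume is named eccp_glance after its brick path:

# Read the volume-id from the intact brick on the other node
getfattr -n trusted.glusterfs.volume-id -e hex /opt/gluster_data/eccp_glance

# On the damaged node: recreate the dir and restore the xattr
mkdir -p /opt/gluster_data/eccp_glance
setfattr -n trusted.glusterfs.volume-id -v 0x<id-from-above> \
    /opt/gluster_data/eccp_glance

# Respawn the brick process and repopulate it
gluster volume start eccp_glance force
gluster volume heal eccp_glance full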
2013 Sep 23
1
Mounting a sub directory of a glusterfs volume
I am not sure whether posting with the subject copied from the mailing-list
webpage of an existing thread will thread my response under it.
Apologies if it doesn't.
I am trying to figure out a way to mount a directory within a gluster volume
on a web server. This directory has a quota enabled to limit a user's usage.
gluster config:
Volume Name: test-volume
features.limit-usage:
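In the 3.x releases of that era a subdirectory could be exported over gluster's NFS server (native FUSE subdir mounts arrived much later), with quota applied per directory. A sketch, assuming a directory /webdata inside test-volume:

# Quota on the directory (matches the features.limit-usage line)
gluster volume quota test-volume enable
gluster volume quota test-volume limit-usage /webdata 10GB

# Allow and perform an NFS mount of just that directory
gluster volume set test-volume nfs.export-dirs on
gluster volume set test-volume nfs.export-dir /webdata
mount -t nfs -o vers=3 server1:/test-volume/webdata /var/www/data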
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while!
I have the following stats:
4085169 files in both bricks
3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running
when bmidata1 died.
gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
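A gfid reported by heal info can be mapped back to a path through the brick's .glusterfs tree; a sketch for one hypothetical gfid:

# Hypothetical gfid from 'gluster volume heal ... info'
GFID=fa23dead-beef-4a5c-9f11-223344556677
BRICK=/data/glusterfs/clifford/brick/brick

# The gfid file lives at .glusterfs/<first 2>/<next 2>/<gfid>
GPATH=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID

# For regular files it is a hard link: a link count of 1
# means the named path is gone, which is why it never heals
stat -c '%h %n' "$GPATH"
find "$BRICK" -samefile "$GPATH" -not -path '*/.glusterfs/*'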