Displaying 20 results from an estimated 7000 matches similar to: "Gluster Self Heal"
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running GlusterFS 3.3.1 on CentOS 6.4.
Output of 'gluster volume status':
Status of volume: glustervol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick 24009 Y 20031
Brick KWTOCUATGS002:/mnt/cloudbrick
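For a 3.3.x replica volume like this, a common first step is to ask the
self-heal daemon what still needs healing. A minimal sketch, using the volume
name from the status output above:
# entries the self-heal daemon still has pending, per brick
gluster volume heal glustervol info
# confirm the bricks and the Self-heal Daemon all show Online = Y
gluster volume status glustervol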
2013 May 10
2
Self-heal and high load
Hi all,
I'm pretty new to Gluster, and the company I work for uses it for storage
across 2 data centres. An issue has cropped up fairly recently with regards
to the self-heal mechanism.
Occasionally the connection between these 2 Gluster servers breaks or drops
momentarily. Due to the nature of the business it's highly likely that
files have been written during this time. When the
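When both sides keep taking writes across a momentary link drop, the usual
worry on 3.x is split-brain. A quick check (VOLNAME is a placeholder):
# files written independently on both sides may end up in split-brain
gluster volume heal VOLNAME info split-brain
# the backlog the self-heal daemon is still working through
gluster volume heal VOLNAME info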
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi,
I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used
by 4 clients.
Sometimes from some clients I can't access some of the files. After I force
a full heal on the brick I see several files healed. Is this behavior
normal?
Thanks
--
Paulo Silva <paulojjs at gmail.com>
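For reference, the 3.3.x command that triggers the kind of full heal
described above is (VOLNAME is a placeholder):
# force a full crawl instead of waiting for the daemon's periodic one
gluster volume heal VOLNAME full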
2012 May 03
1
GlusterFS 3.3 beta on Debian
Hi,
I'm attempting to install the 3.3 beta3 on Debian.
The files are located in a directory that looks like they were built for
Debian Lenny, here:
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.3.0beta3/Debian/5.0.3/
Note the 5.0.3 at the end of the path.
However, when attempting to install the .deb file, it gives an error
about package libssl1.0.0 being missing.
That
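For what it's worth, dpkg alone never resolves dependencies; the usual
follow-up is to let apt finish the job, which only works if libssl1.0.0
exists in your release's repositories. The package file name below is a
placeholder:
# dpkg reports the unmet dependency and leaves the package unconfigured
dpkg -i glusterfs_3.3.0beta3_amd64.deb
# let apt fetch the missing dependencies and finish the configuration
apt-get -f install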
2013 Feb 27
4
GlusterFS performance
Hello!
I have a GlusterFS installation with the following parameters:
- 4 servers, connected by 1Gbit/s network (760-800 Mbit/s by iperf)
- Distributed-replicated volume with 4 bricks and 2x4 redundancy formula.
- Replicated volume with 2 bricks and 2x2 formula.
I've run into trouble: copying a huge number of files (94,000 files,
3 GB total) takes a terribly long time (from 20 to 40 minutes). I
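Small-file workloads like this are mostly latency-bound, but a couple of
stock volume options are usually worth checking. The option names exist in
3.x; the values below are assumptions to test, not tuned recommendations:
# more io threads per brick for many small concurrent operations
gluster volume set VOLNAME performance.io-thread-count 16
# larger client-side read cache
gluster volume set VOLNAME performance.cache-size 256MB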
2013 Mar 08
1
Debian Squeeze packages available for Gluster 3.4.0-alpha2
I've made packages for Debian Squeeze for Gluster 3.4.0-alpha2,
they are available on
http://torbjorn-dev.trollweb.net/gluster-3.4.0alpha2-debs/.
They built and installed successfully, and have been running nicely
for a couple of hours,
but your mileage may vary.
The Debian packaging is on
http://torbjorn-dev.trollweb.net/gluster-3.4.0alpha2-debs/glusterfs-3.4.0-debian.tar.gz.
I took the
2012 May 03
2
[3.3 beta3] When should the self-heal daemon be triggered?
Hi,
I eventually installed three Debian unstable machines, so I could
install the GlusterFS 3.3 beta3.
I have a question about the self-heal daemon.
I'm trying to get a volume which is replicated, with two bricks.
I started up the volume, wrote some data, then killed one machine, and
then wrote more data to a few folders from the client machine.
Then I restarted the second brick server.
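On 3.3 the daemon crawls periodically on its own, so after restarting the
failed brick server one can simply watch the backlog. A sketch with a
placeholder volume name:
# what has the daemon already healed since the brick came back?
gluster volume heal VOLNAME info healed
# what is still pending?
gluster volume heal VOLNAME info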
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi,
I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to follow when turning a server off, in terms of services etc., or can I just shut the server down?
Thanks & Regards,
Bobby Jacob
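One common sequence for taking a replica node down cleanly on CentOS 6 is
sketched below; this is common practice, not an official procedure:
# stop the management daemon first
service glusterd stop
# brick and NFS/self-heal processes keep running; stop them too
pkill glusterfsd
pkill glusterfs
# after the restart, glusterd respawns the bricks and the self-heal
# daemon catches up on writes made while the node was down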
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts,
We're running glusterfs 3.3 and we have run into file permission problems
after a gluster volume rebalance. Files got sticky permissions T---------
after the rebalance, which breaks our clients' normal fops unexpectedly.
Has anyone seen this issue?
Thank you for your help.
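Entries shown with only the sticky bit set are usually DHT link files:
zero-length pointers that a completed rebalance normally cleans up. A hedged
way to inspect them on a brick (the brick path is a placeholder):
# zero-length files with only the sticky bit set are DHT link files
find /path/to/brick -type f -perm -1000 -size 0 | head
# a real link file carries a linkto xattr naming the target subvolume
getfattr -m . -d -e text /path/to/brick/some/file
# look for trusted.glusterfs.dht.linkto in the output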
2013 Nov 28
1
How to recover an accidentally deleted brick directory?
hi all,
I accidentally removed the brick directory of a volume on one node; the
replica count for this volume is 2.
Now there is no corresponding glusterfsd process on this node, and
'gluster volume status' shows that the brick is offline, like this:
Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513
Brick 192.168.64.12:/opt/gluster_data/eccp_glance
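The usual recovery on a replica-2 volume is to recreate the directory, copy
the volume-id xattr from the surviving brick, and respawn the brick process.
A sketch, with VOLNAME as a placeholder; run the getfattr on the intact node:
# on the surviving node: read the volume id of the good brick
getfattr -n trusted.glusterfs.volume-id -e hex /opt/gluster_data/eccp_glance
# on the damaged node: recreate the directory and restore the xattr
mkdir -p /opt/gluster_data/eccp_glance
setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-good-brick> /opt/gluster_data/eccp_glance
# respawn the missing glusterfsd and let replication refill the brick
gluster volume start VOLNAME force
gluster volume heal VOLNAME full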
2013 Aug 21
1
FileSize changing in GlusterNodes
Hi,
When I upload files into the gluster volume, it replicates all the files to both gluster nodes. But the file size varies slightly (by 4-10 KB), which changes the md5sum of the file.
Command to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4.
This is creating inconsistency between the files on the two bricks. What is the reason for this changed file size, and how can
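Note that du -k reports allocated blocks, not logical file size, so it can
differ across bricks without the contents differing. Comparing apparent size
and checksums directly is more telling; a sketch with a placeholder path:
# allocated blocks -- may differ between bricks (preallocation, sparseness)
du -k /brick/path/file
# logical size -- should match on both replicas
du -k --apparent-size /brick/path/file
ls -l /brick/path/file
# the real consistency check is the content hash
md5sum /brick/path/file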
2012 Nov 26
1
Heal not working
Hi,
I have a volume created of 12 bricks with 3x replication (no stripe). We had to take one server down for maintenance (2 bricks per server, but configured so that the first brick from every server comes first, then the second brick from every server, so no server should appear more than once in any replica group). The server was down for 40 minutes, and after it came up I saw that gluster volume heal home0
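To see whether a backlog like this is actually being worked through on 3.3,
the heal-info subcommands are the place to look (volume name from the post):
# still-pending entries, per brick
gluster volume heal home0 info
# entries the daemon tried and failed to heal
gluster volume heal home0 info heal-failed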
2013 Sep 19
2
Support for GlusterFS
Hi,
Is there an option to procure support for a glusterfs deployment? As we are moving into core production scenarios with glusterfs in mind, it would be reassuring to have this confirmation!
Thanks & Regards,
Bobby Jacob
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi,
Which filesystem is recommended for the bricks in GlusterFS? XFS/EXT3/EXT4, etc.?
Thanks & Regards,
Bobby Jacob
Senior Technical Systems Engineer | eGroup
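XFS is the commonly recommended choice, typically created with a 512-byte
inode size so gluster's extended attributes fit inside the inode. A sketch;
the device path is a placeholder:
# 512-byte inodes leave room for gluster's xattrs
mkfs.xfs -i size=512 /dev/vg0/brick1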
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
Dear glusterfs experts,
Recently we encountered a self-heal daemon crash after rebalancing a volume.
Crash stack below:
+------------------------------------------------------------------------------+
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-03-14 16:33:50
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread
2013 Sep 11
1
Possible memory leak ?
Hi,
I am using gluster 3.3.1 on CentOS 6, installed from
the glusterfs-3.3.1-1.el6.x86_64.rpm rpms.
I am seeing the Committed_AS memory continually increasing and the
processes using the memory are glusterfsd instances.
see http://imgur.com/K3dalTW for graph.
Both nodes are exhibiting the same behaviour. I have tried the suggested
echo 2 > /proc/sys/vm/drop_caches
but it made no
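To separate a real glusterfsd leak from reclaimable cache, it helps to watch
the resident size of the brick processes over time rather than Committed_AS
alone. A sketch:
# resident and virtual size of every brick process
ps -C glusterfsd -o pid,rss,vsz,etime,args
# system-wide committed memory, for comparison with the graph
grep Committed_AS /proc/meminfo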
2013 Jun 03
2
recovering gluster volume || startup failure
Hello Gluster users:
Sorry for the long post; I have run out of ideas here. Kindly let me know if I am looking in the right places for logs, and any suggested actions. Thanks.
A sudden power loss caused a hard reboot; now the volume does not start.
GlusterFS 3.3.1 on CentOS 6.1, transport: TCP
sharing volume over NFS for VM storage - VHD Files
Type: distributed - only 1 node (brick)
XFS (LVM)
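The first places to look after a failed start are the glusterd log and the
brick logs; a sketch of the usual 3.3 paths on CentOS, plus the force-start
fallback (VOLNAME is a placeholder):
# management daemon log: why the volume or brick did not start
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# per-brick logs live under this directory
ls /var/log/glusterfs/bricks/
# if the config is intact, a force start respawns missing brick processes
gluster volume start VOLNAME force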
2013 Apr 30
1
Volume heal daemon 3.4alpha3
gluster> volume heal dyn_coldfusion
Self-heal daemon is not running. Check self-heal daemon log file.
gluster>
Is there a specific log? When I check /var/log/glusterfs/glustershd.log
glustershd.log:[2013-04-30 15:51:40.463259] E
[afr-self-heald.c:409:_crawl_proceed] 0-dyn_coldfusion-replicate-0:
Stopping crawl for dyn_coldfusion-client-1 , subvol went down
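When glustershd has stopped, the documented way to respawn it (along with any
other missing volume daemons) is a force start; a sketch using the volume
name from the post:
# respawns missing daemons, including the self-heal daemon
gluster volume start dyn_coldfusion force
# verify: the Self-heal Daemon rows should now show Online = Y
gluster volume status dyn_coldfusion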
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
Dear all,
Right out of the blue, glusterfs is not working fine any more; every now and
then it stops working, telling me
'Endpoint not connected' and writing core files:
[root at tuepdc /]# file core.15288
core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV),
SVR4-style, from 'glusterfs'
My Version:
[root at tuepdc /]# glusterfs --version
glusterfs 3.2.0 built on Apr 22 2011
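A backtrace from one of those cores is usually what the list will ask for. A
sketch for CentOS, assuming a debuginfo package matching the installed
glusterfs build is available:
# install matching debug symbols so the trace has function names
debuginfo-install glusterfs
# open the core against the binary that produced it
gdb /usr/sbin/glusterfs core.15288
# then inside gdb:
#   (gdb) bt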
2013 Oct 26
1
Crashing (signal received: 11)
I am seeing this crash happening. I am working on the self-heal errors as well; not sure if the two are related. I would appreciate any direction on resolving the issue, as I have clients dropping connections daily.
[2013-10-26 15:35:46.935903] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-ENTV04EP-replicate-9: background meta-data self-heal failed on /
[2013-10-26