Displaying 20 results from an estimated 395 matches for "afre".
Did you mean: afr
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, put simply, combines AFRs and
unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this
cluster configuration:
volume afr-ns
type cluster/afr
subvolumes n1-ns n2-ns n3-ns
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr1
type cluster/afr
subvolumes n1-brick2
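For context, a unify volume tying such AFR subvolumes together would look something like the sketch below; the subvolume names afr2 and afr3 and the rr scheduler are assumptions, not taken from the post:
volume unify0
  type cluster/unify
  option namespace afr-ns
  option scheduler rr
  subvolumes afr1 afr2 afr3
end-volume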
2010 Nov 11
1
Possible split-brain
Hi all,
I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client:
[root at localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
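On GlusterFS 3.3 and later (the release in use at the time of this 2010 post predates it), a first check for split-brain entries on this volume would be the heal CLI:
# gluster volume heal datastore info split-brain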
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
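As an illustration of step 2, the getfattr call is run against the file's path on every brick of the replica (the brick path here is hypothetical):
# getfattr -d -e hex -m . /bricks/brick1/path/to/file
The trusted.afr.* values returned from the different bricks are what get compared to decide which copy is good.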
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
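Assuming the tool meant here is the gluster-health-report utility announced around that time, installing and running it per node looks roughly like:
# pip install gluster-health-report
# gluster-health-report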
2017 Oct 26
2
not healing one file
Hi Karthik,
thanks for taking a look at this. I haven't been working with gluster long
enough to make heads or tails of the logs. The logs are attached to
this mail, and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
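A natural next step on a Replicate volume in this state is to list what is still pending heal:
# gluster volume heal home info
# gluster volume heal home statistics heal-count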
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount:
ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
One of the processes usually dies pretty quickly like this:
[608] open
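When one of the dbench processes dies like this, the FUSE client log for that mount is the first place to look; with the default log naming for a /mnt/gfs mount that would be roughly:
# tail -f /var/log/glusterfs/mnt-gfs.log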
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`, then launch the heal command like you did earlier and see if
> heals
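A sketch of those two checks, using the default glustershd log location and the volume name from the thread:
# grep -i 'connect' /var/log/glusterfs/glustershd.log
# gluster volume start engine force
# gluster volume heal engine
The grep needs to be run on each of the three nodes; messages about a client being disconnected point to the shd having lost a brick connection.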
2012 Feb 05
2
Would a difference in size (and content) of a file on replicated bricks be healed?
Hi...
Started playing with gluster, and the heal function is my "target" for
testing.
Short description of my test
----------------------------
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory: date >data.txt
* Append to file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt
=>
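Condensed into commands, the test looks roughly like this (mount point and brick path are hypothetical):
date > /mnt/glusterfs/data.txt        # create the file through the glusterfs mount
hostname >> /export/brick2/data.txt   # append to one copy directly on a brick
stat /mnt/glusterfs/data.txt          # access via the mount to trigger self-heal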
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote:
> Hi,
>
> Thank you for the answer and sorry for delay:
>
> 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com
> <mailto:ravishankar at redhat.com>>:
>
> 1. What does the glustershd.log say on all 3 nodes when you run
> the command? Does it complain at all about these files?
>
>
>
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote:
>
> 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com
> <mailto:ravishankar at redhat.com>>:
>
>
> Could you check if the self-heal daemon on all nodes is connected
> to the 3 bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running glusterFS 3.3.1 on Centos 6.4.
# gluster volume status
Status of volume: glustervol
Gluster process                               Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick           24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
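Since 3.3 includes the heal CLI, pending and failed heals on this volume can be listed with, for example:
# gluster volume heal glustervol info
# gluster volume heal glustervol info heal-failed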
2009 Jan 07
12
glusterfs alternative ? :P
I know that this is not the appropriate place :). Does anyone know of an
alternative to glusterfs? :)
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi,
Thank you for the answer and sorry for delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain at all about these files?
>
No, glustershd.log is clean; there are no extra log entries after the command on all 3 nodes.
> 2. Are these 12 files also present in the 3rd data brick?
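One way to answer question 2 is to check the files directly on the third node's brick; a sketch with a hypothetical brick path:
# stat /gluster/engine/brick/path/to/file
# getfattr -d -e hex -m . /gluster/engine/brick/path/to/file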
2023 Feb 07
1
File\Directory not healing
Hi All.
Hoping you can help me with a healing problem. I have one file which didn't
self-heal.
It looks to be a problem with a directory in the path, as one node says it's
dirty. I have a replica volume with an arbiter.
This is what the 3 nodes say. One brick on each:
Node1
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
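One way to gather that output from all three nodes in one go (the hostnames and the directory path are hypothetical):
for h in node1 node2 node3; do
    ssh "$h" "getfattr -d -m . -e hex /path/to/dir | grep -E 'afr|dirty'"
done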
2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
Hi, everyone:
We have a glusterfs cluster, version 3.2.7. The volume info is as below:
Volume Name: gfs1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 94 x 3 = 282
Transport-type: tcp
We natively mount the volume on all nodes. When we access the file
"/XMTEXT/gfs1_000/000/000/095" on one node, we get a split-brain error,
while we can access the same file on
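Since 3.2.x has no heal CLI, the usual manual way out of such a split-brain was to move the copy on the brick judged stale out of the way and then access the file again from a client mount so AFR can recreate it from the good replica; a sketch with hypothetical brick and mount paths:
mv /export/brick/XMTEXT/gfs1_000/000/000/095 /root/backup-095    # on the node holding the stale copy
stat /mnt/gfs1/XMTEXT/gfs1_000/000/000/095                       # from a client mount, to re-trigger self-heal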
2008 Sep 05
8
Gluster update | need your support
Dear Members,
Even though the Gluster team is growing at a steady pace, our aggressive development
schedule outpaces our resources. We need to expand and also maintain a 1:1 developer /
QA engineer ratio. Our major development focus in the next 8 months will be towards:
* Large scale regression tests (24/7/365)
* Web based monitoring and management
* Hot upgrade/add/remove of storage nodes
2017 Nov 17
2
Help with reconnecting a faulty brick
On Thursday, November 16, 2017, 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote:
> On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
> > Any way in this situation to check which file will be healed from
> > which brick before reconnecting ? Using some getfattr tricks ?
> Yes, there are afr xattrs that determine the heal direction for each
> file. The good copy
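In practice that means reading the trusted.afr.<volname>-client-N xattrs of the same file on each brick and comparing them; a minimal sketch with a hypothetical brick path, run against the copy on every brick:
# getfattr -d -m trusted.afr -e hex /bricks/brick1/path/to/file
A non-zero pending counter in the xattr that refers to the other brick means this copy blames the other one, so this brick is the side the heal should use as the source.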
2012 Mar 12
0
Data consistency with Gluster 3.2.5
I have set up a replicated, four-node gluster config for a web farm. The
idea is that each web node is its own Gluster server, and will have its
own copy of the entire web root locally. It then serves the cluster to
itself via a mount. We're running it over dual GigE NICs bonded.
The problem I am having is when we switch live traffic to nodes in the
cluster, they almost immediately get
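Each web node serving the volume to itself boils down to a local FUSE mount, along these lines (the volume name and web root are assumptions):
# mount -t glusterfs localhost:/webvol /var/www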
2008 Jun 11
1
software raid performance
Are there known performance issues with using glusterfs on software raid? I've
been playing with a variety of configs (AFR, AFR with Unify) on a two-server
setup. Everything seems to work well, but performance (creating files,
reading files, appending to files) is very slow. Using the same configs on
two non-software raid machines shows significant performance increases.
Before I go a
2024 Jun 26
1
Confusion supreme
I should add that in /var/lib/glusterd/vols/gv0/gv0-shd.vol and
in all other configs in /var/lib/glusterd/ on all three machines
the nodes are consistently named
client-2: zephyrosaurus
client-3: alvarezsaurus
client-4: nanosaurus
This is normal. It was the second time that a brick was removed,
so client-0 and client-1 are gone.
So the problem is the file attributes themselves. And there I see
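The client-N to host mapping quoted above can be read straight out of the shd volfile mentioned earlier, for example:
# grep -E 'volume gv0-client-|remote-host' /var/lib/glusterd/vols/gv0/gv0-shd.vol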