Displaying 20 results from an estimated 3000 matches similar to: "balancing redundancy with space utilization"
2008 Oct 31
3
Problem with xlator
Hi,
I have the following scenario:
#############################################################################
SERVER SIDE (64-bit architecture)
#############################################################################
Two Storage Machines with:
HARDWARE
DELL PE2900 III, Intel Quad Core Xeon E5420 2.5 GHz, 2x6 MB cache, 1333 MHz FSB
RAM: 4 GB FB-DIMM 667 MHz (2x2 GB)
8x 1 TB HDD,
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`, then launch the heal command like you did earlier and see if
> heals
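A minimal sketch of the checks described in that reply, assuming the volume is named "engine" and the default log location (the grep pattern is only illustrative):
  # On each node, check whether the self-heal daemon reports all bricks as connected
  grep -iE 'connected|disconnected' /var/log/glusterfs/glustershd.log | tail
  # If the shd is not connected to all three bricks, restart it
  gluster volume start engine force
  # Then re-run the heal and see what is still pending
  gluster volume heal engine
  gluster volume heal engine info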
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi...
Started playing with gluster. And the heal function is my "target" for
testing.
Short description of my test
----------------------------
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory: date >data.txt
* Append to file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt
=>
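A rough reconstruction of that test as commands, assuming a hypothetical volume mounted at /mnt/testvol whose four bricks live under /bricks/brick{1..4} on the same machine:
  # Create the file through the glusterfs mount
  date > /mnt/testvol/data.txt
  # Tamper with one copy directly on a brick (never do this on a volume you care about)
  hostname >> /bricks/brick2/data.txt
  # Trigger a self-heal by looking the file up through the mount
  stat /mnt/testvol/data.txt
  # Compare the copies on the bricks afterwards
  md5sum /bricks/brick*/data.txt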
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount:
ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
One of the processes usually dies pretty quickly like this:
[608] open
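One way to fire both loads in parallel and wait for them, assuming passwordless ssh to the two hosts; quoting the remote command also keeps dbench running on the remote machine instead of locally:
  for h in bal-6.example.com bal-7.example.com; do
      ssh "$h" 'apt-get install -y dbench && dbench 6 -t 60 -D /mnt/gfs' &
  done
  wait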
2008 Oct 22
1
tar: File changed as we read it
tar: blah.bleh: file changed as we read it
I have a file (two files actually) with different timestamps on the AFR
backends -- I presume because the file timestamp was set to the current time
when the last write operation completed, and there is some minor clock skew or
network lag. tar notices this intermittently, depending on which mirror
handles the request.
It is a little distracting
2010 Nov 11
1
Possible split-brain
Hi all,
I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client:
[root at localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
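For completeness, a hedged sketch of the client-side native mount on each machine (the mount point /mnt/datastore is hypothetical):
  mount -t glusterfs 192.168.253.1:/datastore /mnt/datastore
  # or the equivalent /etc/fstab entry
  192.168.253.1:/datastore  /mnt/datastore  glusterfs  defaults,_netdev  0 0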
2023 Feb 07
1
File\Directory not healing
Hi All.
Hoping you can help me with a healing problem. I have one file which didn't
self heal.
It looks to be a problem with a directory in the path, as one node says it's
dirty. I have a replica volume with an arbiter.
This is what the 3 nodes say, one brick on each:
Node1
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
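A sketch of the check run on each of the three nodes, with a hypothetical brick path; a non-zero trusted.afr.dirty value (or pending trusted.afr.<volume>-client-N counters) on only one of the bricks marks the entry as needing heal:
  # Run against the brick path on each node, not the fuse mount
  getfattr -d -m . -e hex /bricks/brick1/path/to/dir | grep -E 'afr|dirty'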
2017 Nov 17
2
Help with reconnecting a faulty brick
On Thursday, November 16, 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote:
> On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
> > Any way in this situation to check which file will be healed from
> > which brick before reconnecting ? Using some getfattr tricks ?
> Yes, there are afr xattrs that determine the heal direction for each
> file. The good copy
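To see which copy AFR considers good before reconnecting the brick, the same xattr inspection can be done per file on both bricks; a hedged sketch with hypothetical brick paths and volume name myvol:
  # On the brick that stayed up
  getfattr -d -m trusted.afr -e hex /bricks/good/path/to/file
  # On the brick that was down
  getfattr -d -m trusted.afr -e hex /bricks/faulty/path/to/file
  # Non-zero trusted.afr.myvol-client-N counters on the good brick, pointing at the
  # other brick's client index, mean the good brick is the heal source.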
2024 Jun 26
1
Confusion supreme
I should add that in /var/lib/glusterd/vols/gv0/gv0-shd.vol and
in all other configs in /var/lib/glusterd/ on all three machines
the nodes are consistently named
client-2: zephyrosaurus
client-3: alvarezsaurus
client-4: nanosaurus
This is normal. It was the second time that a brick was removed,
so client-0 and client-1 are gone.
So the problem is the file attributes themselves. And there I see
2024 Jun 26
1
Confusion supreme
Hello all
I have a mail store on a volume replica 3 with no arbiter. A while
ago the disk of one of the bricks failed and it took me several days
to notice it. When I did, I removed that brick from the volume,
replaced the failed disk, updated the OS on that machine from el8
to el9 and gluster on all three nodes from 10.3 to 11.1, added back
the brick and started a heal. Things appeared to work
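The steps described correspond roughly to the following commands; the volume name gv0 comes from this thread, while the host name and brick path below are only placeholders and the replica counts depend on the setup:
  # Drop the brick whose disk failed (replica count goes from 3 to 2)
  gluster volume remove-brick gv0 replica 2 node3:/bricks/gv0 force
  # After replacing the disk and upgrading, add the brick back
  gluster volume add-brick gv0 replica 3 node3:/bricks/gv0
  # Kick off a full heal and watch progress
  gluster volume heal gv0 full
  gluster volume heal gv0 info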
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote:
>
> 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
>
> Could you check if the self-heal daemon on all nodes is connected
> to the 3 bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using
2008 Oct 17
6
GlusterFS compared to KosmosFS (now called CloudStore)?
Hi.
I'm evaluating GlusterFS for our DFS implementation, and wondered how it
compares to KFS/CloudStore?
These features look especially nice
(http://kosmosfs.sourceforge.net/features.html). Any idea which of them exist
in GlusterFS as well?
Regards.
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote:
> Thanks Ravi for your answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry, I should have been clearer. Yes, the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
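A hedged sketch of what that cleanup usually looks like, run on the 3rd (arbiter) node directly against its brick; the xattr name is taken from the truncated command above, the volume name is only inferred from it, and the brick path is a placeholder:
  # On node 3, on the brick copy of the affected file
  setfattr -x trusted.afr.myvol-private-client-0 /data/brick/path/to/file
  # Then let the self-heal daemon re-evaluate it
  gluster volume heal myvol-private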
2023 Feb 14
1
File\Directory not healing
I've touched the directory one level above the directory with the I/O issue,
as the one above that is the one showing as dirty.
It hasn't healed. Should the self-heal daemon automatically kick in here?
Is there anything else I can do?
Thanks
David
On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> You can always mount it locally on any of the
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13.
------- Original Message -------
On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote:
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
------- Original Message -------
On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> >
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
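The check referred to is the standard heal-info query; a sketch with a hypothetical volume name:
  gluster volume heal myvolume info
  # lists, per brick, the entries still pending heal
  gluster volume heal myvolume info split-brain
  # shows only entries that are actually in split-brain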
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running GlusterFS 3.3.1 on CentOS 6.4.
Gluster volume status
Status of volume: glustervol
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick         24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As was suggested to me on this mailing list in the past, I now ran a stat and getfattr on one of the problematic files on all nodes, and finally a stat on the fuse mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File:
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, put simply, combines AFRs and unify: the
servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this
cluster configuration:
volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume

volume afr1
  type cluster/afr
  subvolumes n1-brick2
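The excerpt stops here. For context, a hedged sketch (from memory of the legacy 1.x-era volfile syntax, so the option names and the afr2/afr3 subvolumes are assumptions, not the poster's actual config) of how such AFR subvolumes were typically tied together by a unify volume:
  volume unify
    type cluster/unify
    subvolumes afr1 afr2 afr3
    option namespace afr-ns
    option scheduler rr
  end-volume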