similar to: Poor performance with AFR

Displaying 20 results from an estimated 1000 matches similar to: "Poor performance with AFR"

2010 Feb 16
1
Migrate from an NFS storage to GlusterFS
Hi - I already have an NFS server in production which shares Web data for a 4-node Apache cluster. I'd like to switch to GlusterFS. Do I have to copy the files from the NFS storage to a GlusterFS one, or can it work if I just install GlusterFS on that server and configure a GlusterFS volume on the existing storage directory (assuming, of course, the NFS server is shut down and not used
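In principle the existing data directory can simply be exported as a Gluster brick, though whether pre-existing files are picked up cleanly depends on the Gluster version, so testing on a copy first is safer. A minimal single-brick sketch, assuming a placeholder server name nfs01 and data path /export/webdata:

    # on the (stopped) NFS server, export the existing directory as a brick
    gluster volume create webdata nfs01:/export/webdata
    gluster volume start webdata

    # on each Apache node, mount the volume with the native client
    mount -t glusterfs nfs01:/webdata /var/www/shared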
2011 Sep 06
1
Inconsistent md5sum of replicated file
I was wondering if anyone would be able to shed some light on how a file could end up with inconsistent md5sums on Gluster backend storage. Our configuration is running on Gluster v3.1.5 in a distribute-replicate setup consisting of 8 bricks. Our OS is Red Hat 5.6 x86_64. Backend storage is an ext3 RAID 5. The 8 bricks are in RR DNS and are mounted for reading/writing via NFS automounts.
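One way to narrow down which replica diverged is to checksum the file directly on each brick's backend path and compare that with what a client mount returns; the brick and mount paths below are placeholders:

    # on each replica server, against the backend brick directly
    md5sum /bricks/brick1/path/to/file

    # and once through a client mount for comparison
    md5sum /mnt/gluster/path/to/file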
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, in short, combines AFR and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:
volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume
volume afr1
  type cluster/afr
  subvolumes n1-brick2
2010 Nov 11
1
Possible split-brain
Hi all, I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client: [root at localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
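To confirm whether the two copies actually disagree, the AFR changelog xattrs can be inspected directly on both bricks; non-zero trusted.afr.* values on both sides, each blaming the other, are the classic split-brain signature. The brick paths match the create command above, the file path is a placeholder:

    # on 192.168.253.1
    getfattr -d -m . -e hex /glusterfs/primary/path/to/file | grep trusted.afr

    # on 192.168.253.3
    getfattr -d -m . -e hex /glusterfs/secondary/path/to/file | grep trusted.afr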
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
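For reference, the logs requested in point 3 normally live under /var/log/glusterfs/ on each server; a hedged sketch of gathering everything asked for, with <volname> and the brick/file paths as placeholders:

    gluster volume info <volname>
    getfattr -d -e hex -m . /brick/path/to/file          # repeat on every brick
    tail -n 200 /var/log/glusterfs/glustershd.log
    tail -n 200 /var/log/glusterfs/glfsheal-<volname>.log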
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount: ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs One of the processes usually dies pretty quickly like this: [608] open
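One detail worth noting in the quoted commands: without quoting, only the apt-get part runs over ssh and dbench ends up running on the local machine. A hedged restatement that keeps the whole pipeline on the remote hosts (hostnames as in the post):

    ssh bal-6.example.com 'apt-get install -y dbench && dbench 6 -t 60 -D /mnt/gfs'
    ssh bal-7.example.com 'apt-get install -y dbench && dbench 6 -t 60 -D /mnt/gfs'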
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote: > Hi, > > Thank you for the answer and sorry for delay: > > 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com > <mailto:ravishankar at redhat.com>>: > > 1. What does the glustershd.log say on all 3 nodes when you run > the command? Does it complain anything about these files? > > >
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi... Started playing with gluster, and the heal function is my "target" for testing. Short description of my test:
* 4 replicas on a single machine
* glusterfs mounted locally
* Create a file in the glusterfs-mounted directory: date >data.txt
* Append to the file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt =>
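Worth keeping in mind for a test like this: appending to the file directly on a brick bypasses GlusterFS, so the AFR changelog xattrs are not updated and a plain stat through the mount may not flag the file as needing heal. On 3.3 and later the heal commands make the result easier to observe; the volume name below is a placeholder:

    gluster volume heal <volname>          # trigger an index heal (3.3+)
    gluster volume heal <volname> info     # list entries still pending heal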
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > Could you check if the self-heal daemon on all nodes is connected to the 3 > bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using `gluster volume start > engine force`, then launch the heal command like you did earlier and see if > heals
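Putting the advice above together as a hedged sequence, with the volume name "engine" as used in the thread:

    gluster volume start engine force      # respawns missing brick/shd processes, data untouched
    gluster volume heal engine             # kick off the heal again
    gluster volume heal engine info        # check whether the unsynced entries drain away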
2009 Jan 07
12
glusterfs alternative ? :P
I know that this is not the appropriate place :). Does anyone know of an alternative to glusterfs? :)
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
Hi, Thank you for the answer and sorry for delay: 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: 1. What does the glustershd.log say on all 3 nodes when you run the > command? Does it complain anything about these files? > No, glustershd.log is clean, no extra log after command on all 3 nodes > 2. Are these 12 files also present in the 3rd data brick?
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi, I'm running GlusterFS 3.3.1 on CentOS 6.4.
gluster volume status
Status of volume: glustervol
Gluster process                          Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick      24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
2008 Sep 05
8
Gluster update | need your support
Dear Members, Even though the Gluster team is growing at a steady pace, our aggressive development schedule outpaces our resources. We need to expand and also maintain a 1:1 developer / QA engineer ratio. Our major development focus in the next 8 months will be towards: * Large scale regression tests (24/7/365) * Web based monitoring and management * Hot upgrade/add/remove of storage nodes
2008 Jun 11
1
software raid performance
Are there known performance issues with using glusterfs on software raid? I've been playing with a variety of configs (AFR, AFR with Unify) on a two server setup. Everything seems to work well, but performance (creating files, reading files, appending to files) is very slow. Using the same configs on two non-software raid machines shows significant performance increases. Before I go a
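When comparing setups like this, it can help to separate raw RAID throughput from GlusterFS overhead by running the same simple write test on the brick filesystem and then through the mount; a rough sketch with placeholder paths:

    # directly on the software-RAID backed brick filesystem
    dd if=/dev/zero of=/data/brick/ddtest bs=1M count=1024 conv=fdatasync

    # through the GlusterFS mount
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fdatasync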
2011 Aug 24
1
Input/output error
Hi, everyone. It's nice meeting you. My English is not very good.... I am writing this because I'd like to update GlusterFS to 3.2.2-1, and I want to change from gluster mount to nfs mount. I installed GlusterFS 3.2.1 one week ago, with replication across 2 servers. OS: CentOS 5.5 64bit RPM: glusterfs-core-3.2.1-1 glusterfs-fuse-3.2.1-1 command: gluster volume create syncdata replica 2 transport tcp
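For the mount change itself, the same volume can usually be reached either way; Gluster's built-in NFS server speaks NFSv3 only, so vers=3 matters. The server name below is a placeholder, the volume name syncdata is from the post:

    # native FUSE client
    mount -t glusterfs server1:/syncdata /mnt/syncdata

    # built-in gluster NFS (NFSv3)
    mount -t nfs -o vers=3,tcp server1:/syncdata /mnt/syncdata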
2017 Nov 17
2
Help with reconnecting a faulty brick
On Thursday, November 16, 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote: > On 11/16/2017 12:54 PM, Daniel Berteaud wrote: > > Any way in this situation to check which file will be healed from > > which brick before reconnecting ? Using some getfattr tricks ? > Yes, there are afr xattrs that determine the heal direction for each > file. The good copy
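For reference, those xattrs can be read on the backend of each brick before reconnecting; in a trusted.afr.<volname>-client-N value the 24 hex digits are three 4-byte counters (data, metadata, entry changelog), and a non-zero counter on one brick means it holds pending changes to be healed onto the other. A hedged example with placeholder paths:

    getfattr -d -m . -e hex /bricks/brick1/path/to/file | grep trusted.afr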
2023 Feb 07
1
File\Directory not healing
Hi All. Hoping you can help me with a healing problem. I have one file which didn't self heal. It looks to be a problem with a directory in the path, as one node says it's dirty. I have a replica volume with arbiter. This is what the 3 nodes say, one brick on each. Node1: getfattr -d -m . -e hex /path/to/dir | grep afr getfattr: Removing leading '/' from absolute path names
2012 Jan 13
1
Quota problems with Gluster3.3b2
Hi everyone, I'm playing with Gluster3.3b2, and everything is working fine when uploading stuff through swift. However, when I enable quotas on Gluster, I randomly get permission errors. Sometimes I can upload files, most times I can't. I'm mounting the partitions with the acl flag, I've tried wiping out everything and starting from scratch, same result. As soon as I
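For reference, quota in 3.3 is enabled per volume and limited per directory from the gluster CLI, and the acl flag is passed at mount time; a hedged sketch with placeholder volume and server names:

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /uploads 10GB
    mount -t glusterfs -o acl server1:/myvol /mnt/myvol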
2011 Jun 09
1
NFS problem
Hi, I got the same problem as Juergen. My volume is a simple replicated volume with 2 hosts and GlusterFS 3.2.0:
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB