similar to: stripped volume in 3.4.0qa5 with horrible read performance

Displaying 20 results from an estimated 3000 matches similar to: "stripped volume in 3.4.0qa5 with horrible read performance"

2013 Oct 23
3
Samba vfs_glusterfs Quota Support?
Hi All, I'm setting up a gluster cluster that will be accessed via smb. I was hoping that the quotas would carry over to the smb shares. I've configured a quota on the path itself:
# gluster volume quota gfsv0 list
path                     limit_set   size
----------------------------------------------------------------------------------
/shares/testsharedave    10GB        8.0KB
And I've
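For context, exposing a quota-limited gluster path over Samba usually follows roughly the sequence sketched below. The volume name gfsv0 and the path /shares/testsharedave come from the message above; the share name, volfile server and the idea of appending to smb.conf are illustrative assumptions, not the poster's actual configuration.

# enable quotas and set the limit on the directory (names from the post above)
gluster volume quota gfsv0 enable
gluster volume quota gfsv0 limit-usage /shares/testsharedave 10GB
gluster volume quota gfsv0 list

# hypothetical share definition using the vfs_glusterfs module
cat >> /etc/samba/smb.conf <<'EOF'
[testsharedave]
    path = /shares/testsharedave
    vfs objects = glusterfs
    glusterfs:volume = gfsv0
    glusterfs:volfile_server = localhost
    read only = no
EOF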
2012 Dec 18
1
Infiniband performance issues answered?
In IRC today, someone who was hitting that same IB performance ceiling that occasionally gets reported had this to say [11:50] <nissim> first, I ran fedora which is not supported by Mellanox OFED distro [11:50] <nissim> so I moved to CentOS 6.3 [11:51] <nissim> next I removed all distribution-related infiniband rpms and built the latest OFED package [11:52] <nissim>
2008 Oct 21
1
behavior of ALU Scheduler
Hello, I have one question about the ALU scheduler. If for example I have one UNIFY volume which is using the ALU scheduler with the following config: volume unify type cluster/unify option namespace afr-ns option scheduler rr option scheduler alu # use the ALU scheduler option alu.limits.min-free-disk 3% # Don't create files on a volume with less than 5% free disk space
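For readers who have not seen the legacy unify/ALU translator, a fuller client volfile of this shape looked roughly like the sketch below, written out here as a shell heredoc; the file path, thresholds and subvolume names are illustrative assumptions, not the poster's values.

# sketch only: a 1.x-era unify volume using the ALU scheduler
cat > /etc/glusterfs/glusterfs-client.vol <<'EOF'
volume unify
  type cluster/unify
  option namespace afr-ns
  option scheduler alu
  # order of usage metrics the scheduler considers when placing new files
  option alu.order disk-usage:read-usage:write-usage:open-files-usage
  # stop creating files on a subvolume once its free space drops below this
  option alu.limits.min-free-disk 5%
  option alu.disk-usage.entry-threshold 2GB
  option alu.disk-usage.exit-threshold 60MB
  # how often the scheduler refreshes its usage statistics
  option alu.stat-refresh.interval 10sec
  subvolumes brick1 brick2 brick3
end-volume
EOF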
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount: ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs One of the processes usually dies pretty quickly like this: [608] open
2013 Dec 04
1
Testing failover and recovery
Hello, I've found GlusterFS to be an interesting project. Not much experience with it yet (although I have similar use cases with DRBD+NFS setups), so I set up some test cases to try out failover and recovery. For this I have a setup with two glusterfs servers (each is a VM) and one client (also a VM). I'm using GlusterFS 3.4 btw. The servers manage a gluster volume created as: gluster volume
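The create and mount steps for a two-server replicated test volume of this kind typically look like the sketch below; the hostnames, brick paths and volume name are placeholders, not the poster's actual commands.

# on one of the server VMs: create and start a 2-way replicated volume
gluster peer probe server2
gluster volume create testvol replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start testvol

# on the client VM: FUSE mount; failover between the two bricks is handled by the client
mount -t glusterfs server1:/testvol /mnt/testvol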
2012 Dec 17
2
Transport endpoint
Hi, I've got a Gluster error: Transport endpoint not connected. It came up twice after trying to rsync a 2 TB filesystem over; it reached about 1.8 TB and then got the error. Logs on the server side (in reverse time order): [2012-12-15 00:53:24.747934] I [server-helpers.c:629:server_connection_destroy] 0-RedhawkShared-server: destroyed connection of
2018 Apr 11
2
Unreasonably poor performance of replicated volumes
Hello everybody! I have 3 gluster servers (*gluster 3.12.6, CentOS 7.2*; those are actually virtual machines located on 3 separate physical XenServer 7.1 servers). They are all connected via an infiniband network. Iperf3 shows around *23 Gbit/s network bandwidth* between each pair of them. Each server has 3 HDDs put into a striped (3-way) LVM2 thin pool with a logical volume created on top of it, formatted
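A 3-disk striped LVM thin pool of the kind described is usually built along the following lines; the device, VG and LV names and the sizes are illustrative assumptions, not the poster's layout.

# three physical disks striped into one thin pool, with a thin LV as the brick
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg_brick /dev/sdb /dev/sdc /dev/sdd
# -i 3 = stripe across 3 PVs, -I 256 = 256 KiB stripe size
lvcreate --type thin-pool -l 95%FREE -i 3 -I 256 -n brickpool vg_brick
lvcreate -V 2T --thin -n brick1 vg_brick/brickpool
# XFS with 512-byte inodes is the commonly recommended brick filesystem
mkfs.xfs -i size=512 /dev/vg_brick/brick1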
2018 Apr 12
0
Unreasonably poor performance of replicated volumes
Guess you went through the user lists and already tried something like this: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html I have the exact same setup, and below is as far as it got after months of trial and error. We all have roughly the same setup and the same issue - you can find posts like yours on a daily basis. On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva
2018 Apr 13
1
Unreasonably poor performance of replicated volumes
Thanks a lot for your reply! You guessed it right though - mailing lists, various blogs, documentation, videos and even the source code at this point. Changing some of the options does make performance slightly better, but nothing particularly groundbreaking. So, if I understand you correctly, no one has yet managed to get acceptable performance (relative to underlying hardware capabilities) with
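For reference, the tuning being referred to is normally applied with "gluster volume set"; the options below are commonly discussed examples and the values are illustrative, not recommendations from this thread.

# client/server event threads and io threads (values are examples only)
gluster volume set <volname> client.event-threads 4
gluster volume set <volname> server.event-threads 4
gluster volume set <volname> performance.io-thread-count 32
# client-side read caching
gluster volume set <volname> performance.cache-size 1GB
# quick-read mainly helps small-file read workloads
gluster volume set <volname> performance.quick-read on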
2008 Sep 17
6
Poor performance with AFR
I have just finished my first steps with glusterfs. Realizing in principle what I wanted to do (including installation from source) was astonishingly easy; however, the performance is extremely poor. Thus, I'd appreciate comments and suggestions on what to do/try next. * Operating system is Ubuntu 8.04.1 (32bit on servers, 64bit on client) * glusterfs is 1.3.12; compiled from source; *
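With 1.3-era glusterfs, poor throughput over AFR was commonly tackled by stacking client-side performance translators on top of the afr volume. A minimal sketch follows, written as a heredoc; the file path, subvolume names (remote1, remote2) and option values are placeholders, not the poster's volfile.

# sketch only: afr plus write-behind and io-cache in a 1.3-style client volfile
cat >> /etc/glusterfs/glusterfs-client.vol <<'EOF'
volume afr0
  type cluster/afr
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 1MB
  subvolumes afr0
end-volume

volume iocache
  type performance/io-cache
  option cache-size 64MB
  subvolumes writebehind
end-volume
EOF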
2010 Mar 24
3
mounting gfs partition hangs
Hi, I have configured two machines for testing gfs filesystems. They are attached to an iSCSI device and the CentOS versions are: CentOS release 5.4 (Final) Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009 i686 i686 i386 GNU/Linux The problem is that if I try to mount a gfs partition it hangs.
[root@node2 ~]# cman_tool status
Version: 6.2.0
Config Version: 29
Cluster Name:
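When a GFS mount hangs, the usual first checks are cluster membership and fencing state before retrying the mount. A hedged sketch; the device path and mount point below are placeholders, not taken from the post.

# cluster membership and quorum
cman_tool status
cman_tool nodes
# fence / dlm / gfs group state; a stuck fence operation commonly blocks mounts
group_tool ls
# retry the mount verbosely
mount -v -t gfs /dev/vg_iscsi/lv_gfs /mnt/gfs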
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks.
[root@gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
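For reference, a typical setup and diagnostic sequence for a geo-replication session looks like the sketch below; only the master volume name gfsvol comes from the message above, while the slave user, slave host and slave volume name are placeholders.

# create the pem keys and the session, then start and inspect it
gluster system:: execute gsec_create
gluster volume geo-replication gfsvol geouser@slavehost::gfsvol_slave create push-pem
gluster volume geo-replication gfsvol geouser@slavehost::gfsvol_slave start
# "Faulty" sessions are usually explained by the status detail and the gsyncd logs
gluster volume geo-replication gfsvol geouser@slavehost::gfsvol_slave status detail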
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
Hi Anatoliy, The heal command is basically used to heal any mismatching contents between replica copies of the files. For the command "gluster volume heal <volname>" to succeed, you should have the self-heal-daemon running, which is true only if your volume is of type replicate/disperse. In your case you have a plain distribute volume where you do not store the replica of any
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after the 10% gluster disk space reservation). For some reason I can't 'heal' the volume:
# gluster volume heal gv0
Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org> wrote:
> Hi Karthik,
>
> Thanks a lot for the explanation.
>
> Does it mean a distributed volume health can be checked only by "gluster
> volume status" command?
Yes. I am not aware of any other command which can give the status of plain distribute volume which is similar to
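To illustrate the distinction being made: for a plain distribute volume, brick and process health is checked with the status command, while heal info is only meaningful for replicate/disperse volumes. A minimal sketch, using the volume name gv0 from the thread above.

# brick and self-heal daemon process status
gluster volume status gv0
# per-brick detail: free space, inodes, filesystem type
gluster volume status gv0 detail
# only meaningful on replicate/disperse volumes
gluster volume heal gv0 info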
2010 Apr 30
5
Mount drbd/gfs logical volume from domU
Hi list, I set up drbd/gfs with a logical volume on 2 Xen Dom0s; this works as primary/primary so both DomUs will be able to write to them at the same time. But I don't know how to mount them from my domUs; I can see them with fdisk -l. The partition is /dev/xvdb1. Should I install gfs on the domUs and mount them on each as gfs partitions?
[root@p3x0501 ~]# fdisk -l
Disk /dev/xvda: 5368 MB, 5368709120
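Mounting a shared GFS LV from inside the domUs generally requires the GFS and cluster packages plus cluster membership in each domU; a rough sketch, assuming CentOS 5 package names and a placeholder mount point.

# inside each domU
yum install gfs-utils kmod-gfs cman
service cman start
mount -t gfs /dev/xvdb1 /mnt/shared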
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi, Maybe someone can point me to documentation or explain this? I can't find it myself. Do we have any other useful resources besides doc.gluster.org? As far as I can see, many gluster options are not described there, or there is no explanation of what they do... On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume
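Besides the documentation site, the CLI itself can enumerate options on this release; the volume name below is a placeholder.

# list all settable options with a short description
gluster volume set help
# show current (including default) values for one volume
gluster volume get <volname> all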
2023 Jun 30
1
remove_me files building up
Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause. Since then, however, we've seen some strange behaviour,
2023 Jul 03
1
remove_me files building up
Hi, you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick? Best Regards, Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers
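The inode-exhaustion check being asked for is typically done per brick with something like the following; the brick mount point is a placeholder.

# inode usage and XFS geometry of the arbiter brick filesystem
df -i /data/glusterfs/arbiter-brick
xfs_info /data/glusterfs/arbiter-brick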
2007 Jun 11
3
domU on gfs
Hey All, I have a cluster set up and exporting gfs storage, and everything is working OK (as far as I know anyway). But instead of mounting the gfs storage, I want the xen guest to be installed on the shared gfs storage. But with my current setup, when I install the domU on the gfs storage it changes it to ext3. Is it possible this way or does the domU have to be on an ext filesystem?