similar to: Finding performance bottlenecks

Displaying 20 results from an estimated 700 matches similar to: "Finding performance bottlenecks"

2018 May 01
0
Finding performance bottlenecks
Hi. So is KVM or VMware the host(s)? I basically have the same setup, i.e. 3 x 1TB "raid1" nodes and VMs, but 1Gb networking. I do notice that with VMware using NFS, disk was pretty slow (40% of a single disk), but this was over 1Gb networking, which was clearly saturating. Hence I am moving to KVM to use glusterfs, hoping for better performance and bonding; it will be interesting to see
2013 Sep 16
1
Gluster 3.4 QEMU and Permission Denied Errors
Hey List, I'm trying to test out using Gluster 3.4 for virtual machine disks. My environment consists of two Fedora 19 hosts with gluster and qemu/kvm installed. I have a single volume on gluster called vmdata that contains my qcow2-formatted image, created like this: qemu-img create -f qcow2 gluster://localhost/vmdata/test1.qcow 8G I'm able to boot my created virtual machine, but in the
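For a setup like this, permission-denied errors on the gluster:// URI usually come down to gluster rejecting unprivileged client ports and to image ownership. A rough sketch of the usual workaround, assuming the vmdata volume above and the Fedora qemu uid/gid of 107 (verify locally, these are assumptions):

    # allow qemu (an unprivileged client) to connect to the bricks
    gluster volume set vmdata server.allow-insecure on
    # also add "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol and restart glusterd

    # make the image owned by the qemu user as the bricks see it (uid/gid are distro-specific)
    gluster volume set vmdata storage.owner-uid 107
    gluster volume set vmdata storage.owner-gid 107

    # boot the guest directly over libgfapi
    qemu-system-x86_64 -m 2048 -drive file=gluster://localhost/vmdata/test1.qcow,format=qcow2,if=virtio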
2017 Jun 09
2
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On Fri, Jun 9, 2017 at 12:41 PM, <lemonnierk at ulrar.net> wrote: > > I'm thinking the following: > > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work
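For reference, a sketch of the full sequence being discussed, using the datastore4 volume and brick paths quoted above; the heal at the end repopulates the new brick:

    # drop the dead node's brick, shrinking the replica count to 2
    gluster volume remove-brick datastore4 replica 2 vna.proxmox.softlog:/tank/vmdata/datastore4 force

    # add the replacement node's brick, growing the replica count back to 3
    gluster volume add-brick datastore4 replica 3 vnd.proxmox.softlog:/tank/vmdata/datastore4

    # trigger a full self-heal so the new brick gets a copy of all data
    gluster volume heal datastore4 full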
2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
Status: We have a 3 node gluster cluster (proxmox based) - gluster 3.8.12 - Replica 3 - VM Hosting Only - Sharded Storage Or I should say we *had* a 3 node cluster; one node died today. Possibly I can recover it, in which case no issues, we just let it heal itself. For now it's running happily on 2 nodes with no data loss - gluster for the win! But it's looking like I might have to replace the
2017 Sep 21
2
Performance drop from 3.8 to 3.10
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial drop in read/write performance. env: - 3 node, replica 3 cluster - Private dedicated Network: 1Gx3, bond: balance-alb - was able to down the volume for the upgrade and reboot each node - Usage: VM Hosting (qemu) - Sharded Volume - sequential read performance in VMs has dropped from 700Mbps to 300Mbps - Seq Write
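One way to narrow down where a regression like this sits is the built-in profiler; a minimal sketch, assuming the volume kept the same name after the upgrade:

    # start collecting per-brick latency and fop statistics
    gluster volume profile <volname> start

    # run the sequential read/write test inside a VM, then dump the counters
    gluster volume profile <volname> info

    # stop profiling when done (it adds a small overhead)
    gluster volume profile <volname> stop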
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote: > Ji-Hyeon, > > You're saying that "stripe=2 transport=rdma" should work. OK, that > was the first thing I wanted to know. I'll put together logs later this week. Note that "stripe" is not tested much and is practically unmaintained. We do not advise you to use it. If you have large files that you
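For large files, the usual replacement for "stripe" is sharding, enabled per volume. A sketch, assuming an existing volume (the block size shown is just an example and only affects files created after enabling):

    # split large files into fixed-size shards distributed across the volume
    gluster volume set <volname> features.shard on

    # optional: choose the shard size (64MB is a common default)
    gluster volume set <volname> features.shard-block-size 64MB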
2018 May 10
2
broken gluster config
Whatever repair happened has now finished, but I still have this; I can't find anything so far telling me how to fix it. Looking at http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/ I can't determine which file (or dir, or gv0?) is actually the issue. [root at glusterp1 gv0]# gluster volume heal gv0 info split-brain Brick
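Once heal info split-brain lists the offending entry, the CLI can resolve it by picking a winner. A sketch, assuming a gfid entry as printed by that command and the brick names from this thread (file paths relative to the volume root work too):

    # keep the copy with the newest modification time
    gluster volume heal gv0 split-brain latest-mtime gfid:<gfid-from-heal-info>

    # or keep the bigger file
    gluster volume heal gv0 split-brain bigger-file gfid:<gfid-from-heal-info>

    # or explicitly pick the copy on one brick as the source
    gluster volume heal gv0 split-brain source-brick glusterp1:/bricks/brick1/gv0 gfid:<gfid-from-heal-info>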
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
Hi Anatoliy, The heal command is basically used to heal any mismatching contents between replica copies of the files. For the command "gluster volume heal <volname>" to succeed, you should have the self-heal-daemon running, which is true only if your volume is of type replicate/disperse. In your case you have a plain distribute volume where you do not store the replica of any
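A quick way to confirm this locally, assuming the gv0 volume from the report:

    # a plain "Distribute" type here means there is no self-heal daemon to run
    gluster volume info gv0 | grep Type

    # for replicate/disperse volumes this lists a "Self-heal Daemon" entry per node
    gluster volume status gv0

    # if bricks (or the self-heal daemon) are down, this restarts the missing processes
    gluster volume start gv0 force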
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after 10% of gluster disk space reservation). For some reason I can't "heal" the volume: # gluster volume heal gv0 Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
2013 Nov 29
1
Self heal problem
Hi, I have a glusterfs volume replicated on three nodes. I am planning to use the volume as storage for VMware ESXi machines using NFS. The reason for using three nodes is to be able to configure quorum and avoid split-brains. However, during my initial testing, when I intentionally and gracefully restarted the node "ned", a split-brain/self-heal error occurred. The log on "todd"
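The quorum behaviour being tested here is driven by volume options; a sketch of a common configuration for a 3-way replica, assuming defaults otherwise (option values are illustrative):

    # client-side quorum: writes need a majority of the 3 replicas to succeed
    gluster volume set <volname> cluster.quorum-type auto

    # server-side quorum: bricks stop if glusterd loses contact with a majority of peers
    gluster volume set <volname> cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%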
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained. Ah, this was what I suspected. Understood. I'll be happy with "shard". Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of those acts also as a client. I looked into logs. I paste lengthy logs below with
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks. [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
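The usual first steps when a session goes Faulty are to check its per-worker status and logs and then restart it. A sketch, assuming the master volume gfsvol above; the slave host and slave volume names here are placeholders:

    # show per-brick worker state (Faulty workers are listed here)
    gluster volume geo-replication gfsvol <slavehost>::<slavevol> status detail

    # worker tracebacks end up under /var/log/glusterfs/geo-replication/ on the master nodes

    # restart the session after fixing the underlying problem
    gluster volume geo-replication gfsvol <slavehost>::<slavevol> stop
    gluster volume geo-replication gfsvol <slavehost>::<slavevol> start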
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend. On the bricks of the first replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 On the fourth replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3 Also run the "gluster volume
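The attributes of interest in that output are the trusted.afr.* entries; a hedged sketch of narrowing the dump to just those, reusing the gfid path above (roughly, non-zero trusted.afr.<volname>-client-N values indicate changes still pending against that brick):

    # dump only the AFR changelog xattrs for the entry
    getfattr -d -e hex -m trusted.afr <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2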
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
> I'm thinking the following: > > gluster volume remove-brick datastore4 replica 2 > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > gluster volume add-brick datastore4 replica 3 > vnd.proxmox.softlog:/tank/vmdata/datastore4 I think that should work perfectly fine yes, either that or directly use replace-brick?
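The replace-brick alternative mentioned here does the swap in one step; a sketch, assuming the same volume and brick names as the quoted commands (the replacement brick path is taken from them and may differ in practice):

    # swap the old brick for the new one and start healing onto it immediately
    gluster volume replace-brick datastore4 \
        vna.proxmox.softlog:/tank/vmdata/datastore4 \
        vnd.proxmox.softlog:/tank/vmdata/datastore4 \
        commit force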
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote: > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work perfectly fine yes, either that > or
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDRs and how they are used). Just glance at the logs of the client process where you saw the errors, which could give some hints. If you don't understand the logs, share them, and we will try to look into them. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
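For a FUSE mount, the client log referred to here normally lives under /var/log/glusterfs/, named after the mount point; a sketch, assuming a mount at /mnt/gv0 (the path is hypothetical):

    # FUSE client log: slashes in the mount path become dashes in the file name
    tail -f /var/log/glusterfs/mnt-gv0.log

    # RPC/XDR-level problems usually surface in rpc or protocol/client messages
    grep -iE 'rpc|xdr| E ' /var/log/glusterfs/mnt-gv0.log | tail -50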
2018 May 10
0
broken gluster config
Trying to read this, I can't understand what is wrong? [root at glusterp1 gv0]# gluster volume heal gv0 info Brick glusterp1:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1 Brick glusterp2:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log. >> Run these commands on all the bricks of the replica pair to get the attrs set on the backend. [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 getfattr: Removing leading '/' from absolute path names # file:
2018 May 22
2
split brain? but where?
Hi, Which version of gluster are you using? You can find which file that is using the following command: find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of gfid>/<next 2 characters of gfid>/<full gfid> Please provide the getfattr output of the file which is in split brain. The steps to recover from split-brain can be found here,
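Using the gfid and brick path shown in the "broken gluster config" messages above purely as an illustration, the template expands to something like:

    # the first two and next two characters of the gfid select the .glusterfs subdirectories
    find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

    # then fetch the xattrs of the path it prints, on every brick of the replica
    getfattr -d -e hex -m . /bricks/brick1/gv0/<path-printed-by-find>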
2018 Feb 04
1
Troubleshooting glusterfs
Please help with troubleshooting glusterfs with the following setup: Distributed volume without replication. Sharding enabled. [root at master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info Volume Name: gv0 Type: Distribute Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925 Status: Started Snapshot Count: 0 Number of Bricks: 27 Transport-type: tcp Bricks: Brick1:
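Useful first checks for a distribute-only volume like this one; a sketch, assuming gv0 as above and a gluster release recent enough to support "volume get":

    # confirm every brick process is online and see per-brick disk usage
    gluster volume status gv0 detail

    # check how sharding is configured on the volume
    gluster volume get gv0 all | grep -i shard

    # if bricks were added recently, check whether a rebalance is pending or still running
    gluster volume rebalance gv0 status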