Displaying 20 results from an estimated 10000 matches similar to: "Slow performance of gluster volume"
2017 Sep 10
2
Slow performance of gluster volume
Great to hear!
----- Original Message -----
> From: "Abi Askushi" <rightkicktech at gmail.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>
> Cc: "gluster-user" <gluster-users at gluster.org>
> Sent: Friday, September 8, 2017 7:01:00 PM
> Subject: Re: [Gluster-users] Slow performance of gluster volume
>
> Following
2017 Sep 06
2
Slow performance of gluster volume
I tried to follow the steps from
https://wiki.centos.org/SpecialInterestGroup/Storage to install the latest
gluster on the first node.
It installed 3.10 and not 3.11. I am not sure how to install 3.11 without
compiling it.
Then, when I tried to start gluster on the node, the bricks were reported
down (the other 2 nodes still have 3.8). Not sure why. The logs were showing
the below (even after rebooting the
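For reference, installing GlusterFS from the CentOS Storage SIG usually goes through one of the centos-release-gluster* repository packages; a minimal sketch, with the 3.10 package shown (the series that actually got installed here — a different release package would be needed for another version):

    # list the release packages the Storage SIG provides
    yum search centos-release-gluster
    # install the repo definition for the wanted series (3.10 shown)
    yum install centos-release-gluster310
    # install and start the server
    yum install glusterfs-server
    systemctl enable --now glusterd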
2017 Sep 11
2
Slow performance of gluster volume
----- Original Message -----
> From: "Abi Askushi" <rightkicktech at gmail.com>
> To: "Ben Turner" <bturner at redhat.com>
> Cc: "Krutika Dhananjay" <kdhananj at redhat.com>, "gluster-user" <gluster-users at gluster.org>
> Sent: Monday, September 11, 2017 1:40:42 AM
> Subject: Re: [Gluster-users] Slow performance of
2017 Sep 11
0
Slow performance of gluster volume
Did not upgrade gluster yet. I am still using 3.8.12. Only the mentioned
changes provided the performance boost.
From which version to which version did you see such a performance boost? I
will try to upgrade and check the difference as well.
On Sep 11, 2017 2:45 AM, "Ben Turner" <bturner at redhat.com> wrote:
Great to hear!
----- Original Message -----
> From: "Abi
2017 Sep 08
0
Slow performance of gluster volume
The following changes resolved the perf issue:
Added the option to /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on
then restarted glusterd.
Then set the volume option:
gluster volume set vms server.allow-insecure on
I am now reaching the max network bandwidth, and performance of the VMs is
quite good.
Did not upgrade glusterd.
As a next try I am thinking of upgrading gluster to 3.12 + test
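Spelled out, the change described above amounts to roughly the following, run on every server (the volume name "vms" is taken from the thread):

    # /etc/glusterfs/glusterd.vol -- add inside the "volume management" block
    option rpc-auth-allow-insecure on

    # restart the management daemon so the option takes effect
    systemctl restart glusterd

    # allow clients connecting from unprivileged ports for the volume
    gluster volume set vms server.allow-insecure on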
2017 Sep 06
0
Slow performance of gluster volume
Do you see any improvement with 3.11.1, as that has a patch that improves
perf for this kind of workload?
Also, could you disable eager-lock and check if that helps? I see that max
time is being spent acquiring locks.
-Krutika
On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi <rightkicktech at gmail.com> wrote:
> Hi Krutika,
>
> Is there anything in the profile indicating what is
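Disabling eager-lock for a test, as suggested above, would look something like this (volume name "vms" assumed from the thread; re-enable it afterwards if it makes no difference):

    gluster volume set vms cluster.eager-lock off
    # ... rerun the dd test / workload ...
    gluster volume set vms cluster.eager-lock on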
2017 Sep 11
0
Slow performance of gluster volume
Hi Abi
Can you please share your current transfer speeds after you made the change?
Thank you.
On Mon, Sep 11, 2017 at 9:55 AM, Ben Turner <bturner at redhat.com> wrote:
> ----- Original Message -----
> > From: "Abi Askushi" <rightkicktech at gmail.com>
> > To: "Ben Turner" <bturner at redhat.com>
> > Cc: "Krutika Dhananjay"
2017 Sep 05
3
Slow performance of gluster volume
Hi Krutika,
I already have a preallocated disk on the VM.
Now I am checking performance with dd on the hypervisors which have the
gluster volume configured.
I also tried several values of shard-block-size and I keep getting the same
low values on write performance.
Enabling client-io-threads also did not have any effect.
The version of gluster I am using is glusterfs 3.8.12, built on May 11 2017
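The two tunables mentioned above are set per volume; a hedged example, with the volume name "vms" and the 64MB value only as placeholders:

    gluster volume set vms features.shard-block-size 64MB
    gluster volume set vms performance.client-io-threads on
    # confirm the value in use
    gluster volume get vms features.shard-block-size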
2017 Sep 05
0
Slow performance of gluster volume
OK, my understanding is that with preallocated disks the performance with
and without shard will be the same.
In any case, please attach the volume profile[1], so we can see what else
is slowing things down.
-Krutika
[1] -
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi
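The profile referenced above is collected per volume; a minimal sketch (volume name "vms" assumed):

    gluster volume profile vms start
    # ... run the slow workload (e.g. the dd test) ...
    gluster volume profile vms info > /tmp/vms-profile.txt
    gluster volume profile vms stop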
2017 Sep 05
0
Slow performance of gluster volume
I'm assuming you are using this volume to store VM images, because I see
shard in the options list.
Speaking from the shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This will at least eliminate the need for shard to perform multiple steps
as part of the writes - such as creating the shard and then writing to it
and then updating the
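Creating a preallocated image up front, as suggested, might look like this (file names and sizes are only illustrative):

    # qcow2 with space reserved via fallocate
    qemu-img create -f qcow2 -o preallocation=falloc vm-disk.qcow2 50G
    # or a fully preallocated raw image
    qemu-img create -f raw -o preallocation=full vm-disk.img 50G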
2017 Sep 04
2
Slow performance of gluster volume
Hi all,
I have a gluster volume used to host several VMs (managed through oVirt).
The volume is a replica 3 with arbiter, and the 3 servers use a 1 Gbit
network for the storage.
When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1
oflag=direct) outside of the volume (e.g. writing at /root/) the performance
of dd is reported to be ~700MB/s, which is quite decent. When testing the
dd on
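For comparison, the same dd test run against the local disk and against the gluster mount point (the mount path is an assumption):

    # local-disk baseline (~700MB/s reported above)
    dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
    # same command on the FUSE-mounted gluster volume
    dd if=/dev/zero of=/mnt/gluster-vms/testfile bs=1G count=1 oflag=direct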
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
After further tests (I'm trying to convince myself about gluster reliability
:-) I've found that with
performance.write-behind off
the VM works without problems. Now I'll try with write-behind on and
flush-behind on too.
On 18/01/2018 13:30, Krutika Dhananjay wrote:
> Thanks for that input. Adding Niels since the issue is reproducible
> only with libgfapi.
>
>
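The toggles being tested above are plain volume options; a sketch (volume name "gvtest" taken from later in the thread):

    # test that worked: write-behind disabled
    gluster volume set gvtest performance.write-behind off
    # follow-up test: write-behind back on, plus flush-behind
    gluster volume set gvtest performance.write-behind on
    gluster volume set gvtest performance.flush-behind on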
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Thanks for that input. Adding Niels since the issue is reproducible only
with libgfapi.
-Krutika
On Thu, Jan 18, 2018 at 1:39 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Another update.
>
> I've set up a replica 3 volume without sharding and tried to install a VM
> on a qcow2 volume on that device; however the result is the same and the VM
>
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Another update.
I've set up a replica 3 volume without sharding and tried to install a VM
on a qcow2 volume on that device; however the result is the same and the
VM image has been corrupted, exactly at the same point.
Here's the volume info of the created volume:
Volume Name: gvtest
Type: Replicate
Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
Status: Started
Snapshot Count: 0
Number
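For context, a replica 3 volume like the one shown above is typically created along these lines (hostnames and brick paths are made up for illustration):

    gluster volume create gvtest replica 3 \
        node1:/bricks/gvtest/brick node2:/bricks/gvtest/brick node3:/bricks/gvtest/brick
    gluster volume start gvtest
    # produces the output quoted above
    gluster volume info gvtest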
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Hi,
after our IRC chat I've rebuilt a virtual machine with a FUSE-based
virtual disk. Everything worked flawlessly.
Now I'm sending you the output of the requested getfattr command on the
disk image:
# file: TestFUSE-vda.qcow2
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x40ffafbbe987445692bb31295fa40105
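Output in that form usually comes from getfattr run directly against the file on a brick; a sketch with an assumed brick path:

    getfattr -d -m . -e hex /bricks/gvtest/brick/TestFUSE-vda.qcow2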
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I actually use FUSE and it works. If I try to use the "libgfapi" direct
interface to gluster in qemu-kvm, the problem appears.
On 17/01/2018 11:35, Krutika Dhananjay wrote:
> Really? Then which protocol exactly do you see this issue with?
> libgfapi? NFS?
>
> -Krutika
>
> On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi
> Srl <luca at
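The difference being discussed is only in how qemu reaches the image; roughly (paths, host and volume names assumed):

    # FUSE: qemu opens the image through the mounted filesystem
    -drive file=/mnt/gvtest/Test-vda2.qcow2,format=qcow2,if=virtio
    # libgfapi: qemu talks to the gluster servers directly
    -drive file=gluster://node1/gvtest/Test-vda2.qcow2,format=qcow2,if=virtio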
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Really? Then which protocol exactly do you see this issue with? libgfapi?
NFS?
-Krutika
On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Of course. Here's the full log. Please note that in FUSE mode everything
> apparently works without problems. I've installed 4 VMs and updated them
> without problems.
>
>
>
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Of course. Here's the full log. Please note that in FUSE mode
everything apparently works without problems. I've installed 4 VMs and
updated them without problems.
On 17/01/2018 11:00, Krutika Dhananjay wrote:
>
>
> On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi
> Srl <luca at gvnet.it> wrote:
>
>
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've just done all the steps to reproduce the problem.
The VM volume has been created via "qemu-img create -f qcow2
Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE. I've also
tried to create the volume with preallocated metadata, which moves the
problem a bit further away (in time). The volume is a replica 3 arbiter 1
volume hosted on XFS bricks.
Here are the
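The two image variants mentioned (plain vs preallocated metadata) correspond to:

    # plain qcow2, as quoted above
    qemu-img create -f qcow2 Test-vda2.qcow2 20G
    # with metadata preallocation, which only delayed the corruption
    qemu-img create -f qcow2 -o preallocation=metadata Test-vda2.qcow2 20G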
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> I've run the test with the raw image format (preallocated too) and the
> corruption problem is still there (but without errors in the bricks' log file).
>
> What does the "link" error in the bricks' log files mean?
>
> I've seen the source code looking for the