Mitja Mihelič
2015-Jun-01 11:11 UTC
[Gluster-users] Client load high (300) using fuse mount
Hi!

I am trying to set up a WordPress cluster using GlusterFS for storage. The web nodes will access the same WordPress install on a volume mounted via FUSE from a 3-peer GlusterFS trusted storage pool (TSP).

I started with one web node and WordPress on local storage. The load average was constantly around 5, and iotop showed disk reads of about 300 kB/s or less. The load average stayed below 6.

When I mounted the GlusterFS volume on the web node, the 1-minute load average went over 300. Each of the 3 peers is transmitting about 10 MB/s to my web node regardless of the load. The TSP peers are on 10 Gbit NICs and the web node is on a 1 Gbit NIC.

I'm out of ideas here... Could it be the network? What should I look at for optimizing the network stack on the client?

Options set on the TSP:
Options Reconfigured:
performance.cache-size: 4GB
network.ping-timeout: 15
cluster.quorum-type: auto
network.remote-dio: on
cluster.eager-lock: on
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.cache-refresh-timeout: 4
performance.io-thread-count: 32
nfs.disable: on

Regards, Mitja

--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877, fax: +386 1 479 88 78
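P.S. For completeness, the volume is mounted on each web node with a plain GlusterFS FUSE mount. A minimal sketch of what that looks like (the hostname, volume name and mount point below are placeholders, not my actual values):

mount -t glusterfs gluster1.example.com:/wpvol /var/www/wordpress

or the equivalent /etc/fstab entry:

gluster1.example.com:/wpvol /var/www/wordpress glusterfs defaults,_netdev 0 0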
Pranith Kumar Karampuri
2015-Jun-02 05:33 UTC
[Gluster-users] Client load high (300) using fuse mount
Hi Mitja,
Could you please give the output of the following commands:
1) gluster volume info
2) gluster volume profile <volname> start
3) Wait 5-10 minutes while the CPU usage is high
4) gluster volume profile <volname> info > output-you-need-to-attach-to-this-mail.txt
The 4th command shows which operations are being issued most often; the full sequence is sketched below.
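In other words, run something like this on one of the peers (<volname> is your volume's name; the output file name is only an example):

gluster volume info
gluster volume profile <volname> start
# wait 5-10 minutes while the load is high
gluster volume profile <volname> info > output-you-need-to-attach-to-this-mail.txt
# optionally, stop profiling once you have captured the output
gluster volume profile <volname> stop

Then attach the resulting file to your reply.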
Pranith
On 06/01/2015 04:41 PM, Mitja Mihelič wrote:
> Hi!
>
> I am trying to set up a WordPress cluster using GlusterFS for
> storage. Web nodes will access the same WordPress install on a volume
> mounted via FUSE from a 3-peer GlusterFS TSP.
>
> I started with one web node and Wordpress on local storage. The load
> average was constantly about 5. iotop showed about 300kB/s disk reads
> or less. The load average was below 6.
>
> When I mounted the GlusterFS volume on the web node, the 1-minute load
> average went over 300. Each of the 3 peers is transmitting about
> 10 MB/s to my web node regardless of the load.
> TSP peers are on 10 Gbit NICs and the web node is on a 1 Gbit NIC.
>
> I'm out of ideas here... Could it be the network?
> What should I look at for optimizing the network stack on the client?
>
> Options set on TSP:
> Options Reconfigured:
> performance.cache-size: 4GB
> network.ping-timeout: 15
> cluster.quorum-type: auto
> network.remote-dio: on
> cluster.eager-lock: on
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.cache-refresh-timeout: 4
> performance.io-thread-count: 32
> nfs.disable: on
>
> Regards, Mitja
>