Hi Dave,
* Do you run into the same problems without the performance translators
(write-behind, io-cache)? If the problem goes away after removing them,
is it possible to pinpoint which of write-behind and io-cache is causing
it, by retaining only one of the two at a time? Also, is it possible to
find out whether it is the client or the server process that is using
too much CPU ("top" may help here)?
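For example (based on the glusterfs.vol you quote below), a pared-down
client volfile with both performance translators removed would look
like this:

volume my-storage-c
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.201.123
  option remote-subvolume brick
end-volume

volume my-storage-d
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.201.124
  option remote-subvolume brick
end-volume

# replicate is now the topmost volume; write-behind and io-cache are gone
volume replication
  type cluster/replicate
  subvolumes my-storage-c my-storage-d
  option read-subvolume my-storage-c
end-volume

To isolate the translator, re-add only write-behind on top of
replication for one run, and only io-cache for another. "top -c" shows
the full command line of each process, which lets you tell the client
(started with glusterfs.vol) apart from the server (started with
glusterfsd.vol).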
regards,
On Tue, Jan 5, 2010 at 5:43 PM, Dave Hall <dave.hall at skwashd.com> wrote:
> Hi all,
>
> I have been fighting with Gluster on and off over the last month or so.
> I am ready to wave the white flag!
>
> Environment:
>
> * Host: 64-bit KVM virtual machines with 4 virtual CPUs and 4G RAM
> - Physical hosts are Dell R900s with 4 dual-core Xeons & 32G RAM
>
> * Distro: Ubuntu 9.10 (amd64)
>
> * Gluster Version: 2.0.9 (self-compiled debs)
>
> * Storage: ~250G (small fries)
>
> I initially tried using the stock Ubuntu 2.0.2 debs, but they had the
> same problem as described below.
>
> Each of the 3 client nodes is a web head. The Gluster server also
> runs on each of the nodes, mounting an ext4 fs and exporting it. Our
> plan was to load balance across the 3 nodes using ha-proxy, with
> memcached sessions, and to have uploads and config changes replicated
> across the nodes using GlusterFS.
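>
> (For context, the ha-proxy side is nothing exotic - roughly the
> following, with placeholder names and addresses:)
>
> listen web 192.168.XXX.100:80
>     mode http
>     balance roundrobin
>     server web-c 192.168.XXX.122:80 check
>     server web-d 192.168.XXX.123:80 check
>     server web-e 192.168.XXX.124:80 check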
>
> Everything looks great on paper - except that occasionally one of the
> nodes decides to max out a CPU. The trigger seems to be quite random:
> sometimes it is running "find /path/to/gluster-mount -type f", other
> times the server is just idling. I think the largest file we have on
> the file system is a few hundred KB.
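>
> For example (just a sketch - same paths as above):
>
> $ find /path/to/gluster-mount -type f > /dev/null &
> $ top -c    # in another shell; watch which gluster process eats the CPU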
>
> I am not sure what information you need, but here are the config files,
> which I hope will help.
>
> $ cat /etc/glusterfs/glusterfsd.vol
> volume posix
> type storage/posix
> option directory /srv/glusterfs/export
> end-volume
>
> volume locks
> type features/locks
> subvolumes posix
> end-volume
>
> volume brick
> type performance/io-threads
> option thread-count 4
> subvolumes locks
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp
> option transport.socket.bind-address 192.168.XXX.123
> option auth.addr.brick.allow 192.168.XXX.122,192.168.XXX.123,192.168.XXX.124
> option auth.addr.locks.allow 192.168.XXX.*
> option auth.addr.posix.allow 192.168.XXX.*
> subvolumes brick
> end-volume
>
>
> $ cat /etc/glusterfs/glusterfs.vol
> # Note the 3rd node is missing because I removed it earlier today
> volume my-storage-c
> type protocol/client
> option transport-type tcp
> option remote-host 192.168.201.123
> option remote-subvolume brick
> end-volume
>
> volume my-storage-d
> type protocol/client
> option transport-type tcp
> option remote-host 192.168.201.124
> option remote-subvolume brick
> end-volume
>
> volume replication
> type cluster/replicate
> subvolumes my-storage-c my-storage-d
> option read-subvolume my-storage-c
> end-volume
>
> volume writebehind
> type performance/write-behind
> option window-size 1MB
> subvolumes replication
> end-volume
>
> volume cache
> type performance/io-cache
> option cache-size 256MB
> subvolumes writebehind
> end-volume
>
> Selected contents of /etc/fstab:
>
> /dev/mapper/my--ui--c-glusterfs--export /srv/glusterfs/export ext4 noatime,nodev,nosuid 0 2
> /etc/glusterfs/glusterfs.vol /path/to/mount glusterfs defaults 0 0
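>
> The equivalent manual mount, with the log level raised for debugging
> (roughly - check "glusterfs --help" on 2.0.9 for the exact flags), is:
>
> $ glusterfs -f /etc/glusterfs/glusterfs.vol -L DEBUG \
>     -l /var/log/glusterfs/client.log /path/to/mount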
>
> If you need more information let me know and I'll try to supply it. I
> welcome any suggestions/assistance.
>
> Cheers
>
>
> Dave
>
--
Raghavendra G