Hi,
I've been playing with GlusterFS to test it and I'm quite happy with it so
far, except for a few issues I'll describe here.
I'm running a 3-replica setup at work (Ubuntu 14.04 with the latest
GlusterFS release from the PPA) to make sure some data cannot vanish in
case of hardware failure. The volumes are mounted via FUSE and so far this
is working as expected.
Here are some ideas/thoughts as well as issues:
1/ The concept of "transport.socket.bind-address" is not working properly.
I mean, it should work bi-directionally! If a server has multiple IP
addresses on the same network (ie: 192.168.0.1 and 192.168.0.2) to
separate services, you'll end up with a problem: if you set
"bind-address" to "xxx.2" for "glusterd", the second server (ie: having
192.168.0.100 and 192.168.0.101 as IP addresses) will receive requests at
192.168.0.101 (the glusterd bind-address) from... 192.168.0.1! "gluster"
should honor this bind-address option, or at least provide a way to use
one!
I tried to fiddle with the /var/lib/glusterd/peers/<uuid> files but of
course it created connection issues, as the source IP the peers saw wasn't
the one expected. I understand that and it's perfectly normal.
So we definitely need a way, when using "bind-address", to honor it
everywhere: not only as the listen address but also as the source IP used
to connect to the peers.
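For the record, here's roughly what the management volume in my
/etc/glusterfs/glusterd.vol looks like (trimmed down, with the address
from the example above):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        # glusterd listens on this address only...
        option transport.socket.bind-address 192.168.0.2
        # ...but outgoing connections to peers ignore it.
    end-volume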
2/ The documentation is scarce and a lot of information is really hard to
find. I'm new to this project but I spent countless hours trying to figure
out what options/directives I could use in /etc/glusterfs/glusterd.vol,
without much success! There's no single place listing every available
option along with the context and/or configuration file it can be used in!
I found out about "bind-address" by luck while searching for other things
:-) Something as complex as GlusterFS should at least have full
documentation. I had to look at the source code to see what options could
be used by "glusterfs", like the --remote-host option mentioned in various
places but properly documented nowhere :-(
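To give one example, the only usage of --remote-host I could piece
together from the source is something like this (so take it with a grain
of salt; the "gluster" CLI at least accepts it):

    # Apparently points the CLI at a remote glusterd instead of the
    # local one; inferred from the code, documented nowhere.
    gluster --remote-host=192.168.0.101 volume info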
3/ There should be a way to define "default properties" (like a profile)
for each newly created volume. For each volume I created during my tests,
I had to manually remove NFS support as well as set/modify other options.
This is painful if you create more than one volume. I use pure NFSv4 on my
network and I don't need legacy NFS implementations fired up by default
without me knowing, messing with "rpcbind" and leading to the utterly
annoying "socket failed" bug -
https://bugzilla.redhat.com/show_bug.cgi?id=1199936 - spamming the logs
when NFS support is disabled. The Unix philosophy, unlike the Windows
world, is to keep the open services to a minimum and let the admin decide
which ones he/she desires to use.
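Concretely, for every single volume I have to repeat something like this
("myvol" being whatever volume was just created):

    # Disable the legacy gNFS server on the volume; NFSv4 is served
    # elsewhere on my network.
    gluster volume set myvol nfs.disable on
    # ...plus a handful of other "gluster volume set" calls.

If I understand correctly, there are option group files under
/var/lib/glusterd/groups that can be applied in one go with
"gluster volume set <vol> group <name>"; something like that, but applied
automatically to every new volume, would solve this nicely.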
4/ The logging system is overwhelming! Any admin wants/needs logs to
figure out what's wrong, but GlusterFS is just too much! It logs
everything, producing a huge load of information, which is actually quite
counterproductive as it totally drowns useful information under a lot of
not-so-useful information. Besides "diagnostics.client-log-level" and
"diagnostics.brick-log-level", I found no other way to reduce the logs a
bit.
The worst of all is "gluster" itself! Just launch it, get the
"gluster> " prompt and do nothing. Now have a look at
/var/log/glusterfs/cli.log and be frightened by a trace-level log filling
up at 3~8 entries per second!
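For what it's worth, those two options are the only knobs I found, used
like this (WARNING being my own choice of threshold):

    # Raise the log threshold on the client and brick sides so only
    # warnings and worse get written out.
    gluster volume set myvol diagnostics.client-log-level WARNING
    gluster volume set myvol diagnostics.brick-log-level WARNING

They don't help with cli.log, though.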
That's it for now. All of this may sound like harsh criticism, but in fact
I mention these points so they can be addressed, because as far as
functionality goes, my first experience with GlusterFS is quite nice :-)
--
Unix _IS_ user friendly, it's just selective about who its friends are.