On 28 August 2012 11:14, John Doe <jdmls at yahoo.com> wrote:
> Hey,
>
> since RH took control of glusterfs, I've been looking to convert our
> old independent RAID storage servers to several non-RAID glustered ones.
>
> The thing is that, here and there, I've heard a few frightening stories
> from some users (even with the latest release).
> Has anyone used it long enough to say whether one can blindly trust it,
> or whether it is almost there but not yet ready?

I can't say anything about the RH Storage Appliance, but for us,
gluster up to 3.2.x was most definitely not ready. We went through a
lot of pain, and even after optimizing the OS config with help from
gluster support, we still faced insurmountable problems. One of them
was kswapd instances going into overdrive; once the machine reached a
certain load, all networking functions just stopped. I'm not saying
this is gluster's fault, but even with support we were unable to
configure the machines so that this didn't happen. That was on CentOS
5.6/x86_64.
Another problem was that, due to load and frequent updates (each new
version was supposed to fix bugs; some weren't fixed, and there were
plenty of new ones), the filesystems became inconsistent. In theory,
each file lives on exactly one brick. The reality was that in the end
there were many files that existed on all bricks, one copy fully
intact, the others zero-sized with funny permissions. You can guess
what happens if you're not aware of this and try to copy/rsync data
off all bricks to different storage (see the quick check sketched at
the end of this mail). IIRC there were internal changes that required
going through a certain procedure during some upgrades to ensure
filesystem consistency, and we did follow those procedures.
We only started out with 3.0.x, and my impression was that development
was focusing on new features rather than bug fixes.
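
For anyone who ends up having to pull data straight off the bricks: a
small scan like the one below can at least flag the zero-size copies
before you rsync them somewhere and clobber good data. This is only a
sketch; the assumption that the bogus copies are zero-byte files with
nothing but the sticky bit set (the way gluster's distribute link
files usually look), the script name and the example brick paths are
all mine, so adjust the test to whatever your "funny permissions"
actually are.

#!/usr/bin/env python
# Sketch: walk brick directories and list zero-byte files whose mode
# looks like a gluster distribute placeholder (only the sticky bit set).
# The sticky-bit test and the brick paths are assumptions -- adapt them
# before trusting the output.
import os
import stat
import sys

def suspect_files(brick_root):
    """Yield regular files under brick_root that look like empty placeholders."""
    for dirpath, dirnames, filenames in os.walk(brick_root):
        # skip gluster's internal bookkeeping directory, if present
        dirnames[:] = [d for d in dirnames if d != '.glusterfs']
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue                 # vanished or unreadable, skip it
            if not stat.S_ISREG(st.st_mode):
                continue                 # ignore symlinks, devices, ...
            perms = stat.S_IMODE(st.st_mode)
            # zero bytes and nothing but the sticky bit -> probably not real data
            if st.st_size == 0 and perms == stat.S_ISVTX:
                yield path

if __name__ == '__main__':
    # hypothetical usage: python find_placeholders.py /export/brick1 /export/brick2
    for brick in sys.argv[1:]:
        for path in suspect_files(brick):
            print(path)

The resulting list could go into an rsync --exclude-from file, or just
be eyeballed before you trust a straight copy off the bricks.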