Has anyone else noticed a memory leak when using the quick-read translator? I
created a backup server that rsyncs gluster mounts to a Coraid SAN device.
Originally I simply copied the vol files from my production workstations to
this server. However, I quickly found that while rsync was running, the
memory usage of the glusterfs process climbed out of control. The server
would exhaust its RAM, swap everything out, and then glusterfs would be
killed by the OOM killer. Commenting out the quick-read translator:
#volume quickread
# type performance/quick-read
# option cache-timeout 1
# option max-file-size 64kB
# subvolumes iocache
#end-volume
and skipping it for the mount (so the mount uses the io-cache volume
directly) fixed the problem.
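For anyone trying the same workaround, here is roughly what the tail of the
client volfile ends up looking like with quick-read bypassed. The writebehind
name below is just a placeholder for whatever translator io-cache already
sits on top of in your stack:

# io-cache becomes the top-most volume, so the mount uses it directly
volume iocache
  type performance/io-cache
  subvolumes writebehind
end-volume

# quickread block stays commented out as above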
My workstations are having a problem as well. After running for a few days
(as long as a week), users start having their sessions killed. They are
returned to a login prompt and can log in again. glusterfs is still running
at that point, but I think that's only because the users' apps were first on
the OOM killer's list. The backup server, by contrast, runs nothing but
glusterfs and rsync.
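In case anyone wants to reproduce this, these are the standard tools I'd use
to watch for it; nothing gluster-specific:

# log glusterfs resident memory once a minute
watch -n 60 'ps -C glusterfs -o pid,rss,vsz,args'

# after a crash or killed session, confirm the OOM killer fired
dmesg | grep -i -e oom -e "killed process"

If the rss column climbs steadily while rsync walks the mount, you are
probably hitting the same leak.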
--
Benjamin Long