I have 10 Gluster servers, each with 1.5 TB, and I am now adding one more Gluster server.
But how do I redistribute the data so it is spread evenly across all servers?
To do this, I intended to limit the disk usage of the first 10 GlusterFS servers so that
new data would only be written to the 11th server, but I get this error:
syntax error: line 8 (volume 'quota'): ";"
allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'
error in parsing volume file /etc/glfs/userdata0.vol
exiting
This is my GlusterFS server config:
-----------------------------------
volume posix
type storage/posix
option directory /mnt/sdb
end-volume
volume quota
type features/quota
option min-free-disk-limit 50 ; percent of filesystem usage limit
# option refresh-interval 20s ; 20s is the default
# option disk-usage-limit 500GB
subvolumes posix
end-volume
volume locks
type features/locks
subvolumes quota
end-volume
volume brick
type performance/io-threads
option thread-count 32
option cache-size 1024MB
subvolumes locks
end-volume
volume server
type protocol/server
option transport-type tcp
option listen-port 20000
option auth.addr.brick.allow *
subvolumes brick
end-volume
-----------------------------------
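In case it helps narrow it down: the error points at line 8, which is the min-free-disk-limit line in the quota volume, and as far as I know the volume-file parser only treats '#' as a comment, so the inline '; percent of filesystem usage limit' text is read as extra tokens. A minimal sketch of that block with the notes moved onto '#' lines (keeping the same option names as above, which I am assuming are valid for this GlusterFS version):
-----------------------------------
volume quota
type features/quota
# percent of filesystem usage limit
option min-free-disk-limit 50
# option refresh-interval 20s (20s is the default)
# option disk-usage-limit 500GB
subvolumes posix
end-volume
-----------------------------------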
Hi! I have a Xen over Gluster deployment: 3 Xen servers, 2 Gluster servers (2 bricks each) in replication, a dedicated 1 Gbit network between all of them, and about 200 GB for 22 Xen virtual machines.

All was fine until this sequence of actions:
- Shut down the 2nd Gluster server
- Moved it to another rack
- Restarted the server

At that point the auto-healing kicked in and all 22 virtual machines blocked! iowait was at 100% of a processor on the 2nd Gluster server. For the moment I have left just the 1st Gluster server running (the 2nd is disconnected from the network). All servers run Debian Lenny with GlusterFS 3.0.5.

Questions:
1- Is there a way to run the healing in the background?
2- Is there something wrong in my configs? (attached below)

Thanks in advance!

=============== Server config ===================
#
# Define the first brick
#
volume posix1
type storage/posix
option directory /disco1/01
option background-unlink yes # Recommended when the filesystem holds multi-GB files
end-volume

volume locks1
type features/posix-locks
option mandatory-locks on
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 8 # Default is 16
subvolumes locks1
end-volume

#
# Define the second brick
#
volume posix2
type storage/posix
option directory /disco1/02
option background-unlink yes # Recommended when the filesystem holds multi-GB files
end-volume

volume locks2
type features/posix-locks
option mandatory-locks on
subvolumes posix2
end-volume

volume brick2
type performance/io-threads
option thread-count 8 # Default is 16
subvolumes locks2
end-volume

#
# The server exporting both bricks
#
volume server
type protocol/server
option transport-type tcp
option transport.socket.bind-address 10.253.2.8
option transport.socket.listen-port 7000
option auth.addr.brick1.allow *
option auth.addr.brick2.allow *
subvolumes brick1 brick2
end-volume

================ Client config =================
#
# Define the first brick on virgen
#
volume client1
type protocol/client
option transport-type tcp
option remote-host 10.253.2.9
option remote-port 7000
option remote-subvolume brick1
end-volume

#
# Define the second brick on virgen
#
volume client2
type protocol/client
option transport-type tcp
option remote-host 10.253.2.9
option remote-port 7000
option remote-subvolume brick2
end-volume

#
# Define the first brick on mate
#
volume client3
type protocol/client
option transport-type tcp
option remote-host 10.253.2.8
option remote-port 7000
option remote-subvolume brick1
end-volume

#
# Define the second brick on mate
#
volume client4
type protocol/client
option transport-type tcp
option remote-host 10.253.2.8
option remote-port 7000
option remote-subvolume brick2
end-volume

#
# Mirror the first bricks of virgen and mate
#
volume server1
type cluster/replicate
subvolumes client1 client3
end-volume

#
# Mirror the second bricks of virgen and mate
#
volume server2
type cluster/replicate
subvolumes client2 client4
end-volume

#
# Combine the replicated pairs into a single complete volume
#
volume completo
type cluster/distribute
option min-free-disk 20%
option lookup-unhashed yes
subvolumes server1 server2
end-volume

#
# Add performance options
#
volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes completo
end-volume

volume iocache
type performance/io-cache
option cache-size 64MB
subvolumes writebehind
end-volume

--
Martin Eduardo Bradaschia
Intercomgi Argentina
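On question 1: I cannot say for certain this applies to 3.0.5, but the cluster/replicate (AFR) translator has a data-self-heal-algorithm option, and switching it to the diff-based mode is meant to heal only the blocks that differ instead of re-copying whole multi-GB VM images, which may shorten the window where the machines block. A sketch against the client config above (the option name and its availability in this build are my assumption; please verify against the documentation for your version):

volume server1
type cluster/replicate
# Assumption: data-self-heal-algorithm is supported by this GlusterFS build.
# "diff" heals only mismatching blocks rather than copying entire files.
option data-self-heal-algorithm diff
subvolumes client1 client3
end-volume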