2017 Sep 08
3
GlusterFS as virtual machine storage
I think this should be considered a bug. If you have a server crash, the glusterfsd process obviously doesn't exit properly and thus this could lead to IO stopping? And server crashes are the main reason to use a redundant filesystem like gluster. On 8 Sep 2017 12:43 PM, "Diego Remolina" <dijuremo at gmail.com> wrote: This is exactly the problem, systemctl stop glusterd does
2017 Sep 08
1
GlusterFS as virtual machine storage
If your VMs use ext4, also check this: https://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/ I asked him what to do for VMs using XFS and he said he could not find a fix (setting to change) for those. HTH, Diego On Sep 8, 2017 6:19 AM, "Diego Remolina" <dijuremo at gmail.com> wrote: > The issue of I/O stopping may
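For reference, a common way to change ext4's on-error behavior is sketched below; this is only an assumption about the kind of setting the linked post discusses, and /dev/vda1 is a placeholder device name:

  # Keep ext4 running on I/O errors instead of remounting read-only
  # (the default is errors=remount-ro); /dev/vda1 is a placeholder.
  tune2fs -e continue /dev/vda1

  # Or per mount in /etc/fstab:
  # /dev/vda1  /  ext4  defaults,errors=continue  0  1

Note that errors=continue trades the safety of a read-only remount for availability, so it may not suit every workload.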
2017 Sep 08
2
GlusterFS as virtual machine storage
The issue of I/O stopping may also be with glusterfsd not being properly killed before rebooting the server. For example, in RHEL 7.4 with official Gluster 3.8.4, the glusterd service does *not* stop glusterfsd when you run systemctl stop glusterd. So give this a try on the node you wish to reboot: 1. Stop glusterd 2. Check if glusterfsd processes are still running. If they are, use: killall
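As a concrete sketch of the two steps above (the exact commands are illustrative):

  # 1. Stop the management daemon (on Gluster 3.8.4 this does
  #    not stop the brick processes)
  systemctl stop glusterd

  # 2. Check whether brick processes are still running
  pgrep -a glusterfsd

  # If they are, terminate them before rebooting
  killall glusterfsd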
2017 Sep 08
0
GlusterFS as virtual machine storage
Hi Diego, indeed glusterfsd processes are running and that is the reason I do a server reboot instead of systemctl stop glusterd. Is killall different from reboot in the way glusterfsd processes are terminated in CentOS (init 1?)? However, I will try this and let you know. -ps On Fri, Sep 8, 2017 at 12:19 PM, Diego Remolina <dijuremo at gmail.com> wrote: > The issue of I/O stopping may also
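One plausible difference, stated here as an assumption rather than a confirmed answer: killall sends SIGTERM by default while the network is still up, so glusterfsd can close its client connections cleanly, whereas during a reboot the processes may only be killed after networking is already down:

  killall glusterfsd      # SIGTERM: bricks can shut down cleanly
  killall -9 glusterfsd   # SIGKILL: roughly what a hard reboot amounts to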
2017 Sep 08
1
GlusterFS as virtual machine storage
This is exactly the problem: systemctl stop glusterd does *not* kill the brick processes. On CentOS with Gluster 3.10.x there is also a service meant to stop only glusterfsd (the brick processes). I think the reboot process may not be properly stopping glusterfsd, or network or firewall may be stopped before glusterfsd, and so the nodes go into the long timeout. Once again, in my case a simple
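A sketch of what using that service would look like; the unit name glusterfsd.service is an assumption based on the CentOS packaging, not confirmed by the post:

  systemctl stop glusterd     # stops the management daemon only
  systemctl stop glusterfsd   # unit name assumed; meant to stop the brick processes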
2017 Sep 08
0
GlusterFS as virtual machine storage
This is the qemu log of the instance: [2017-09-08 09:31:48.381077] C [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded in the last 1 seconds, disconnecting. [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] (-->
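The "1 seconds" in that log line reflects the volume's network.ping-timeout setting. To inspect or raise it (the volume name is taken from the log; the value 30 is only an example, the Gluster default being 42):

  gluster volume get gv_openstack_1 network.ping-timeout
  gluster volume set gv_openstack_1 network.ping-timeout 30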
2017 Sep 08
3
GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told. I'm using 30 seconds for the timeout, and indeed when a node goes down the VMs freeze for 30 seconds, but I've never seen them go read-only for that. I _only_ use virtio though, maybe it's that. What are you using? On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote: > Back to replica 3 w/o arbiter. Two fio jobs
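The fio jobs themselves are cut off by the snippet; a minimal sketch of the sort of write test one might run inside a VM during a node outage (every parameter below is illustrative, not from the original post):

  # Random-write job to exercise VM storage while a node is down;
  # all values here are examples, not the thread's actual job file
  fio --name=writetest --rw=randwrite --bs=4k --size=256m \
      --ioengine=libaio --direct=1 --numjobs=2 --runtime=120 \
      --time_based --filename=/tmp/fiotest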