search for: gv_openstack_1

Displaying 19 results from an estimated 19 matches for "gv_openstack_1".

2017 Sep 08
2
GlusterFS as virtual machine storage
...mends against changing network ping timeout. We discussed this via IRC recently. Diego On Sep 8, 2017 5:56 AM, "Pavel Szalbot" <pavel.szalbot at gmail.com> wrote: This is the qemu log of instance: [2017-09-08 09:31:48.381077] C [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded in the last 1 seconds, disconnecting. [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d0...
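The log above was produced with the ping timeout lowered to 1 second. For reference, the value currently in effect on a volume can be checked from the gluster CLI; a minimal sketch, assuming the volume name shown in the log (gv_openstack_1):

    # show the ping timeout that clients of this volume will use
    gluster volume get gv_openstack_1 network.ping-timeout
    # list all options explicitly set on the volume
    gluster volume info gv_openstack_1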
2017 Sep 08
0
GlusterFS as virtual machine storage
...e discussed this via IRC recently. > > Diego > > On Sep 8, 2017 5:56 AM, "Pavel Szalbot" <pavel.szalbot at gmail.com> wrote: > > This is the qemu log of instance: > > [2017-09-08 09:31:48.381077] C > [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] > 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded > in the last 1 seconds, disconnecting. > [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] > (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] > (--> /lib64/libgfrpc.so.0(saved_frames_unwin...
2017 Sep 08
1
GlusterFS as virtual machine storage
...e discussed this via IRC recently. > > Diego > > On Sep 8, 2017 5:56 AM, "Pavel Szalbot" <pavel.szalbot at gmail.com> wrote: > > This is the qemu log of instance: > > [2017-09-08 09:31:48.381077] C > [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] > 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded > in the last 1 seconds, disconnecting. > [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] > (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] > (--> /lib64/libgfrpc.so.0(saved_frames_unwin...
2017 Sep 08
0
GlusterFS as virtual machine storage
This is the qemu log of instance: [2017-09-08 09:31:48.381077] C [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded in the last 1 seconds, disconnecting. [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d0...
2017 Sep 08
1
GlusterFS as virtual machine storage
...e discussed this via IRC recently. > > Diego > > On Sep 8, 2017 5:56 AM, "Pavel Szalbot" <pavel.szalbot at gmail.com> wrote: > > This is the qemu log of instance: > > [2017-09-08 09:31:48.381077] C > [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] > 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded > in the last 1 seconds, disconnecting. > [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] > (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] > (--> /lib64/libgfrpc.so.0(saved_frames_unwin...
2017 Sep 08
3
GlusterFS as virtual machine storage
...e discussed this via IRC recently. > > Diego > > On Sep 8, 2017 5:56 AM, "Pavel Szalbot" <pavel.szalbot at gmail.com> wrote: > > This is the qemu log of instance: > > [2017-09-08 09:31:48.381077] C > [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] > 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded > in the last 1 seconds, disconnecting. > [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] > (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] > (--> /lib64/libgfrpc.so.0(saved_frames_unwin...
2017 Sep 08
3
GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told. I'm using 30 seconds for the timeout, and indeed when a node goes down the VMs freeze for 30 seconds, but I've never seen them go read-only for that. I _only_ use virtio though, maybe it's that. What are you using? On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote: > Back to replica 3 w/o arbiter. Two fio jobs
2017 Sep 09
0
GlusterFS as virtual machine storage
...our? I switched to FUSE now and the VM crashed (read-only remount) immediately after one node started rebooting. I tried to mount.glusterfs same volume on different server (not VM), running Ubuntu Xenial and gluster client 3.10.5. mount -t glusterfs -o backupvolfile-server=10.0.1.202 10.0.1.201:/gv_openstack_1 /mnt/gv_openstack_1/ I ran fio job I described earlier. As soon as I killall glusterfsd, fio reported: fio: io_u error on file /mnt/gv_openstack_1/fio.data: Transport endpoint is not connected: read offset=7022575616, buflen=262144 fio: pid=7205, err=107/file:io_u.c:1582, func=io_u error, error=T...
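The reproduction described above, written out as a shell sketch; the mount command and addresses are taken from the post, while the fstab line is only an assumption about how the same mount would typically be made persistent:

    # FUSE-mount the volume, with a backup server for fetching the volfile
    mount -t glusterfs -o backupvolfile-server=10.0.1.202 \
        10.0.1.201:/gv_openstack_1 /mnt/gv_openstack_1/
    # hypothetical /etc/fstab equivalent (not from the thread)
    # 10.0.1.201:/gv_openstack_1  /mnt/gv_openstack_1  glusterfs  defaults,_netdev,backupvolfile-server=10.0.1.202  0 0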
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel. Is there a difference between native client (fuse) and libgfapi in regards to the crashing/read-only behaviour? We use Rep2 + Arb and can shut down a node cleanly, without issue on our VMs. We do it all the time for upgrades and maintenance. However we are still on native client as we haven't had time to work on libgfapi yet. Maybe that is more tolerant. We have Linux VMs mostly
2017 Sep 06
2
GlusterFS as virtual machine storage
...esting and I finally find some time and > infrastructure. > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created > replicated volume with arbiter (2+1) and VM on KVM (via Openstack) > with disk accessible through gfapi. Volume group is set to virt > (gluster volume set gv_openstack_1 virt). VM runs current (all > packages updated) Ubuntu Xenial. > > I set up following fio job: > > [job1] > ioengine=libaio > size=1g > loops=16 > bs=512k > direct=1 > filename=/tmp/fio.data2 > > When I run fio fio.job and reboot one of the data nodes, IO stat...
2017 Sep 06
0
GlusterFS as virtual machine storage
...I have promised to do some testing and I finally find some time and infrastructure. So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created replicated volume with arbiter (2+1) and VM on KVM (via Openstack) with disk accessible through gfapi. Volume group is set to virt (gluster volume set gv_openstack_1 virt). VM runs current (all packages updated) Ubuntu Xenial. I set up following fio job: [job1] ioengine=libaio size=1g loops=16 bs=512k direct=1 filename=/tmp/fio.data2 When I run fio fio.job and reboot one of the data nodes, IO statistics reported by fio drop to 0KB/0KB and 0 IOPS. After a whi...
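A minimal sketch of the setup described in this post, assuming hostnames gfs1, gfs2, gfs3 and a brick path of /data/brick1 (neither is given in the thread); note that the virt option group is normally applied with the "group" keyword rather than the bare form quoted above:

    # replicated volume with arbiter, the 2+1 layout mentioned above
    gluster volume create gv_openstack_1 replica 3 arbiter 1 \
        gfs1:/data/brick1 gfs2:/data/brick1 gfs3:/data/brick1
    gluster volume set gv_openstack_1 group virt   # apply the virt option group
    gluster volume start gv_openstack_1

    # the fio job quoted above, saved to a file and run inside the guest
    cat > fio.job <<'EOF'
    [job1]
    ioengine=libaio
    size=1g
    loops=16
    bs=512k
    direct=1
    filename=/tmp/fio.data2
    EOF
    fio fio.job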
2017 Sep 07
3
GlusterFS as virtual machine storage
...structure. > > > > > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created > > > replicated volume with arbiter (2+1) and VM on KVM (via Openstack) > > > with disk accessible through gfapi. Volume group is set to virt > > > (gluster volume set gv_openstack_1 virt). VM runs current (all > > > packages updated) Ubuntu Xenial. > > > > > > I set up following fio job: > > > > > > [job1] > > > ioengine=libaio > > > size=1g > > > loops=16 > > > bs=512k > > > direct=1 &gt...
2017 Sep 06
0
GlusterFS as virtual machine storage
...time and > > infrastructure. > > > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created > > replicated volume with arbiter (2+1) and VM on KVM (via Openstack) > > with disk accessible through gfapi. Volume group is set to virt > > (gluster volume set gv_openstack_1 virt). VM runs current (all > > packages updated) Ubuntu Xenial. > > > > I set up following fio job: > > > > [job1] > > ioengine=libaio > > size=1g > > loops=16 > > bs=512k > > direct=1 > > filename=/tmp/fio.data2 > > > >...
2017 Sep 03
3
GlusterFS as virtual machine storage
On 30-08-2017 17:07 Ivan Rossi wrote: > There has been a bug associated with sharding that led to VM corruption > that has been around for a long time (difficult to reproduce, I > understood). I have not seen reports on that for some time after the > last fix, so hopefully VM hosting is now stable. Mmmm... this is precisely the kind of bug that scares me... data corruption :| Any
2017 Sep 07
0
GlusterFS as virtual machine storage
...> > >> > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created >> > > replicated volume with arbiter (2+1) and VM on KVM (via Openstack) >> > > with disk accessible through gfapi. Volume group is set to virt >> > > (gluster volume set gv_openstack_1 virt). VM runs current (all >> > > packages updated) Ubuntu Xenial. >> > > >> > > I set up following fio job: >> > > >> > > [job1] >> > > ioengine=libaio >> > > size=1g >> > > loops=16 >> > >...
2017 Sep 09
2
GlusterFS as virtual machine storage
...the VM crashed (read-only remount) > immediately after one node started rebooting. > > I tried to mount.glusterfs same volume on different server (not VM), > running Ubuntu Xenial and gluster client 3.10.5. > > mount -t glusterfs -o backupvolfile-server=10.0.1.202 > 10.0.1.201:/gv_openstack_1 /mnt/gv_openstack_1/ > > I ran fio job I described earlier. As soon as I killall glusterfsd, > fio reported: > > fio: io_u error on file /mnt/gv_openstack_1/fio.data: Transport > endpoint is not connected: read offset=7022575616, buflen=262144 > fio: pid=7205, err=107/file:io_u...
2017 Sep 07
2
GlusterFS as virtual machine storage
...t;> > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created >>> > > replicated volume with arbiter (2+1) and VM on KVM (via Openstack) >>> > > with disk accessible through gfapi. Volume group is set to virt >>> > > (gluster volume set gv_openstack_1 virt). VM runs current (all >>> > > packages updated) Ubuntu Xenial. >>> > > >>> > > I set up following fio job: >>> > > >>> > > [job1] >>> > > ioengine=libaio >>> > > size=1g >>> >...
2017 Sep 10
1
GlusterFS as virtual machine storage
On Fri, Sep 8, 2017 at 5:56 AM, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > This is the qemu log of instance: > > [2017-09-08 09:31:48.381077] C > [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] > 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded > in the last 1 seconds, disconnecting. > > 1 second is not an ideal value for ping timeout. Can you please set it to 30 seconds or so and simulate the problem? I would be interested in observing your logs with a higher ping timeout valu...
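A sketch of the change requested here, assuming it is applied to the same volume as in the log:

    # raise the ping timeout from 1 second to 30 seconds
    gluster volume set gv_openstack_1 network.ping-timeout 30
    # verify the new value
    gluster volume get gv_openstack_1 network.ping-timeout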
2017 Sep 08
0
GlusterFS as virtual machine storage
...> So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created >>>> > > replicated volume with arbiter (2+1) and VM on KVM (via Openstack) >>>> > > with disk accessible through gfapi. Volume group is set to virt >>>> > > (gluster volume set gv_openstack_1 virt). VM runs current (all >>>> > > packages updated) Ubuntu Xenial. >>>> > > >>>> > > I set up following fio job: >>>> > > >>>> > > [job1] >>>> > > ioengine=libaio >>>> > &gt...