similar to: physical servers vs virtual servers for glusterfs

Displaying 20 results from an estimated 30000 matches similar to: "physical servers vs virtual servers for glusterfs"

2017 Sep 09
2
GlusterFS as virtual machine storage
Sorry, I did not start glusterfsd on the node I was shutting down yesterday, and now I killed another one during the FUSE test, so it had to crash immediately (only one of the three nodes was actually up). This definitely happened for the first time (only one node had been killed yesterday). Using FUSE seems to be OK with replica 3, so this may be gfapi related, or perhaps libvirt related. I tried
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi, On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote: > Pavel. > > Is there a difference between native client (fuse) and libgfapi in regards > to the crashing/read-only behaviour? I switched to FUSE now and the VM crashed (read-only remount) immediately after one node started rebooting. I tried to mount.glusterfs same volume on different server (not VM), running
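The thread above contrasts the two client access paths being tested: the native FUSE client and libgfapi. A minimal sketch of the FUSE side, with hypothetical server and volume names (gluster1/gluster2/gluster3, gv0); libgfapi, by contrast, is configured per VM in libvirt as a network disk (`<disk type='network'>` with `protocol='gluster'`), with no mount on the host:

```shell
# Native client: mount the volume over FUSE. Any node in the trusted
# pool can serve the volfile (names here are hypothetical).
mount -t glusterfs gluster1:/gv0 /mnt/gv0

# Listing backup volfile servers lets the mount survive the loss of
# the node named in the mount command.
mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 \
  gluster1:/gv0 /mnt/gv0
```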
2017 Jun 01
0
Who's using OpenStack Cinder & Gluster? [ Was Re: [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder]
Joe, Agree with you on turning this around into something more positive. One aspect that would really help us decide on our next steps here is the actual number of deployments that will be affected by the removal of the gluster driver in Cinder. If you are running or aware of a deployment of OpenStack Cinder & Gluster, can you please respond on this thread or to me & Niels in private
2013 Oct 17
0
Gluster Community Congratulates OpenStack Developers on Havana Release
The Gluster Community would like to congratulate the OpenStack Foundation and developers on the Havana release. With performance-boosting enhancements for OpenStack Block Storage (Cinder), Compute (Nova) and Image Service (Glance), as well as a native template language for OpenStack Orchestration (Heat), the OpenStack Havana release points the way to continued momentum for the OpenStack community.
2017 Jun 20
1
Cloud storage with glusterfs
Hello everybody, I have 3 datacenters in different regions. Can I deploy my own cloud storage with the help of glusterfs on the physical nodes? If I can, what are the differences between glusterfs cloud storage and local gluster storage? Thanks for your attention :)
2017 Aug 24
1
GlusterFS as virtual machine storage
On 8/23/2017 10:44 PM, Pavel Szalbot wrote: > Hi, > > On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote: >> The default timeout for most OS versions is 30 seconds and the Gluster >> timeout is 42, so yes you can trigger an RO event. > I get read-only mount within approximately 2 seconds after failed IO. Hmm, we don't see that, even on busy VMs. We
2017 Aug 24
0
GlusterFS as virtual machine storage
Hi, On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote: > The default timeout for most OS versions is 30 seconds and the Gluster > timeout is 42, so yes you can trigger an RO event. I get read-only mount within approximately 2 seconds after failed IO. > Though it is easy enough to raise as Pavel mentioned > > # echo 90 > /sys/block/sda/device/timeout AFAIK
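The exchange above is about making the guest's SCSI disk timeout outlast Gluster's 42-second ping timeout, so a brief network outage does not trigger a read-only remount. A hedged sketch (the volume name gv0 is hypothetical; the `/sys` write is not persistent across reboots and would need a udev rule or boot script to stick):

```shell
# Inside the guest VM: check and raise the per-disk SCSI timeout.
# The default on many distros is 30s, below Gluster's 42s.
cat /sys/block/sda/device/timeout     # typically shows 30
echo 90 > /sys/block/sda/device/timeout

# On a Gluster server: inspect the ping timeout clients use for
# declaring a brick dead (default 42 seconds).
gluster volume get gv0 network.ping-timeout
```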
2017 Oct 14
1
NIC requirement for tiering glusterfs
Hi everybody, I have a question about the network interface used for tiering in glusterfs. If I have a 1G NIC on the glusterfs servers and clients, can I get more performance by setting up glusterfs tiering, or should the network interface be 10G?
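For reference, a sketch of how a hot tier was attached in the Gluster 3.x tier feature the question refers to (volume name, hostnames, and brick paths are hypothetical; tiering was later deprecated, and tier migration traffic shares the same NIC as client I/O, so a 1G link caps throughput either way):

```shell
# Attach a replicated hot tier backed by SSD bricks (3.x tier CLI):
gluster volume tier gv0 attach replica 2 \
  node1:/ssd/brick node2:/ssd/brick

# Watch promotion/demotion activity between the tiers:
gluster volume tier gv0 status
```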
2017 Aug 30
0
Unable to use Heketi setup to install Gluster for Kubernetes
Hi, I have the following setup in place: 1 node : RancherOS having the Rancher application for Kubernetes setup 2 nodes : RancherOS having the Rancher agent 1 node : CentOS 7 workstation having kubectl installed and the folder cloned/downloaded from https://github.com/gluster/gluster-kubernetes, which I use to run the Heketi setup (gk-deploy -g) I also have the rancher-glusterfs-server container running with
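The gk-deploy invocation mentioned above roughly looks like the following (namespace and paths are illustrative; each storage node must also expose a raw, unformatted block device that Heketi can manage, listed in topology.json):

```shell
# From the cloned gluster-kubernetes repo; the deploy script lives
# in the deploy/ subdirectory.
cd gluster-kubernetes/deploy

# Edit topology.json to describe the storage nodes and their raw
# block devices, then deploy GlusterFS pods (-g) plus Heketi:
./gk-deploy -g -n default topology.json
```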
2017 Sep 06
0
GlusterFS as virtual machine storage
Hi all, I promised to do some testing and I finally found some time and infrastructure. So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via Openstack) with its disk accessible through gfapi. The volume has the virt option group applied (gluster volume set gv_openstack_1 group virt). The VM runs current (all packages updated) Ubuntu Xenial. I set up
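The described setup can be sketched as follows; hostnames and brick paths are hypothetical, while the volume name and the virt group come from the post itself:

```shell
# Replica 3 with one arbiter brick ("2+1"): node3 stores only
# metadata, but still counts toward quorum.
gluster volume create gv_openstack_1 replica 3 arbiter 1 \
  node1:/bricks/gv1 node2:/bricks/gv1 node3:/bricks/gv1
gluster volume start gv_openstack_1

# Apply the virt option group: sharding, eager locking, and quorum
# settings tuned for VM image workloads.
gluster volume set gv_openstack_1 group virt
```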
2017 Aug 30
0
Manually delete .glusterfs/changelogs directory ?
Hi, has anyone any advice to give about my question below? Thanks! > -------- Original Message -------- > Subject: Manually delete .glusterfs/changelogs directory ? > Local Time: August 16, 2017 5:59 PM > UTC Time: August 16, 2017 3:59 PM > From: mabi at protonmail.ch > To: Gluster Users <gluster-users at gluster.org> > > Hello, > > I just deleted (permanently)
2017 Sep 06
0
GlusterFS as virtual machine storage
Mh, I never had to do that and I never had that problem. Is that an arbiter-specific thing? With replica 3 it just works. On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote: > you need to set > > cluster.server-quorum-ratio 51% > > On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > > > Hi all, > > >
2017 Aug 30
0
[Gluster-devel] High load on glusterfs!!
Do we have ACL support on nfs-ganesha? On Aug 30, 2017 3:08 PM, "Niels de Vos" <ndevos at redhat.com> wrote: > On Wed, Aug 30, 2017 at 01:52:59PM +0530, ABHISHEK PALIWAL wrote: > > What is Gluster/NFS and how can we use this. > > Gluster/NFS (or gNFS) is the NFS-server that comes with GlusterFS. It is > a NFSv3 server and can only be used to export Gluster
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure - what is the reason for iops to stop/fail? Rebooting a node is somewhat similar to updating gluster, replacing cabling etc. IMO this should not always end up with arbiter blaming the other node and even though I did not investigate this issue deeply, I do not believe the blame is the reason for iops to drop. On Sep 7, 2017
2017 Sep 06
2
GlusterFS as virtual machine storage
you need to set cluster.server-quorum-ratio 51% On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi all, > > I have promised to do some testing and I finally find some time and > infrastructure. > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created > replicated volume with arbiter (2+1) and VM on KVM (via
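The quorum-ratio advice above translates to something like the following (gv0 is a hypothetical volume name; the ratio option is cluster-wide, so it is set on `all`). With 51% rather than the default 50%, a node partitioned away from more than half of the pool stops its bricks instead of accepting writes:

```shell
# Cluster-wide: require strictly more than half the servers to be up.
gluster volume set all cluster.server-quorum-ratio 51%

# Per volume: enable server-side quorum enforcement.
gluster volume set gv0 cluster.server-quorum-type server
```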
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of replica 3 blaming each other and refusing to do IO. https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote: > *shrug* I don't use arbiter for vm work loads just straight replica 3.
2017 Aug 23
0
GlusterFS as virtual machine storage
Hi, after many VM crashes during upgrades of Gluster, losing network connectivity on one node etc. I would advise running replica 2 with arbiter. I once even managed to break this setup (with arbiter) due to network partitioning - one data node never healed and I had to restore from backups (it was easier and kind of non-production). Be extremely careful and plan for failure. -ps On Mon, Aug
2017 Jul 07
0
Community Meeting 2017-07-05 Minutes
Hi all, The meeting minutes and logs for the community meeting held on Wednesday are available at the links below. [1][2][3][4] We had a good showing this meeting. Thank you everyone who attended this meeting. Our next meeting will be on 19th July. Everyone is welcome to attend. The meeting note pad is available at [5] to add your topics for discussion. Thanks, Kaushal [1]: Minutes:
2017 Sep 07
2
GlusterFS as virtual machine storage
True but to work your way into that problem with replica 3 is a lot harder to achieve than with just replica 2 + arbiter. On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi Neil, docs mention two live nodes of replica 3 blaming each other and > refusing to do IO. > > https://gluster.readthedocs.io/en/latest/Administrator% >
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel. Is there a difference between native client (fuse) and libgfapi in regards to the crashing/read-only behaviour? We use Rep2 + Arb and can shutdown a node cleanly, without issue on our VMs. We do it all the time for upgrades and maintenance. However we are still on native client as we haven't had time to work on libgfapi yet. Maybe that is more tolerant. We have linux VMs mostly
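Shutting down a node cleanly, as described above, means stopping the brick processes (glusterfsd) as well as the management daemon, and confirming heals have finished first. A sketch, assuming the helper script shipped by the glusterfs-server packages (verify the path on your distro; gv0 is hypothetical):

```shell
# Confirm no pending heals before taking the node down:
gluster volume heal gv0 info

# Stop the management daemon, then all remaining gluster processes
# (glusterfsd bricks, self-heal daemon, etc.):
systemctl stop glusterd
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
```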