similar to: GlusterFS as virtual machine storage

Displaying 20 results from an estimated 700 matches similar to: "GlusterFS as virtual machine storage"

2017 Sep 08
1
GlusterFS as virtual machine storage
If your VMs use ext4 also check this: https://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/ I asked him what to do for VMs using XFS and he said he could not find a fix (setting to change) for those. HTH, Diego On Sep 8, 2017 6:19 AM, "Diego Remolina" <dijuremo at gmail.com> wrote: > The issue of I/O stopping may
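A hedged sketch of the general mitigation discussed in that blog post and in this thread (the exact values and device name are assumptions, not the poster's verbatim fix): keep the guest's block-device timeout above Gluster's network.ping-timeout so a brief brick outage does not push ext4 into a read-only remount.
  # Inside the VM (guest side); /dev/sda is an example virtio-scsi device
  cat /sys/block/sda/device/timeout        # often 30s by default
  echo 90 > /sys/block/sda/device/timeout  # comfortably above Gluster's default 42s ping-timeout
  # On the Gluster side, confirm the volume's ping-timeout (volume name is a placeholder)
  gluster volume get myvol network.ping-timeout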
2017 Sep 08
1
GlusterFS as virtual machine storage
This is exactly the problem: systemctl stop glusterd does *not* kill the brick processes. On CentOS with gluster 3.10.x there is also a service meant to stop only glusterfsd (the brick processes). I think the reboot process may not be properly stopping glusterfsd, or the network or firewall may be stopped before glusterfsd, so the nodes go into the long timeout. Once again, in my case a simple
2017 Sep 08
0
GlusterFS as virtual machine storage
This is the qemu log of the instance: [2017-09-08 09:31:48.381077] C [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded in the last 1 seconds, disconnecting. [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] (-->
2017 Sep 08
0
GlusterFS as virtual machine storage
Hi Diego, indeed glusterfsd processes are running, and that is the reason I reboot the server instead of running systemctl stop glusterd. Is killall different from reboot in the way glusterfsd processes are terminated on CentOS (init 1?)? However, I will try this and let you know. -ps On Fri, Sep 8, 2017 at 12:19 PM, Diego Remolina <dijuremo at gmail.com> wrote: > The issue of I/O stopping may also
2017 Sep 08
2
GlusterFS as virtual machine storage
The issue of I/O stopping may also be with glusterfsd not being properly killed before rebooting the server. For example, in RHEL 7.4 with official Gluster 3.8.4, the glusterd service does *not* stop glusterfsd when you run systemctl stop glusterd. So give this a try on the node you wish to reboot: 1. Stop glusterd 2. Check if glusterfsd processes are still running. If they are, use: killall
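A minimal sketch of that pre-reboot sequence, assuming a systemd-based distribution such as RHEL/CentOS 7 (unit names can differ between Gluster packages):
  systemctl stop glusterd        # on the affected releases this stops only the management daemon
  pgrep -a glusterfsd            # check whether brick processes are still running
  killall glusterfsd             # if so, terminate them before rebooting
  # packages that ship a separate brick unit (e.g. gluster 3.10.x on CentOS) offer:
  # systemctl stop glusterfsd
  reboot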
2017 Sep 08
3
GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told. I'm using 30 seconds for the timeout, and indeed when a node goes down the VMs freeze for 30 seconds, but I've never seen them go read-only because of that. I _only_ use virtio though, maybe it's that. What are you using? On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote: > Back to replica 3 w/o arbiter. Two fio jobs
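For reference, a hedged example of how the timeout being discussed is typically inspected and adjusted (volume name is a placeholder; 30 is the value mentioned above, not a recommendation):
  gluster volume get myvol network.ping-timeout       # default is 42 seconds
  gluster volume set myvol network.ping-timeout 30    # the value used in this thread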
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
Hello, I'm having problems when write-behind is enabled on Gluster 3.8.4. I have 2 Gluster servers, each with a single brick that is mirrored between them. The code causing these issues reads two data files, each approx. 128G in size. It opens a third file, mmap()'s that file, and subsequently reads and writes to it. The third file, on successful runs (without write-behind enabled)
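A hedged sketch of how write-behind is usually toggled while debugging this kind of issue (volume name is a placeholder; this shows the standard option, not a statement about the root cause):
  gluster volume get myvol performance.write-behind
  gluster volume set myvol performance.write-behind off   # work around the mmap()/write problem while testing
  gluster volume set myvol performance.write-behind on    # re-enable once a fixed release is in place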
2017 Jun 17
1
client reconnect fails (was gluster heal entry reappears)
Hi Ravi, back to our client-cannot-reconnect-to-gluster-brick problem ... > From: Ravishankar N [ravishankar at redhat.com] > Sent: Monday, 29 May 2017 06:34 > To: Markus Stockhausen; gluster-users at gluster.org > Subject: Re: [Gluster-users] gluster heal entry reappears > > > On 05/28/2017 10:31 PM, Markus Stockhausen wrote: > > Hi, > > > > I'm
2017 Jun 14
0
No NFS connection due to GlusterFS CPU load
When I run a load test with the FIO tool, executing the following job from the client drives two CPU cores up to 100%. While that is happening, if another client attempts an NFS mount, the df command hangs and the NFS connection cannot be established. The log below keeps being output. I believe that if the CPU utilization were distributed, the load problem would be eliminated.
2018 Feb 26
0
rpc/glusterd-locks error
Good morning. We have a 6 node cluster. 3 nodes are participating in a replica 3 volume. Naming convention: xx01 - 3 nodes participating in ovirt_vol xx02 - 3 nodes NOT participating in ovirt_vol Last week, we restarted glusterd on each node in the cluster to update (one at a time). The three xx01 nodes all show the following in glusterd.log: [2018-02-26 14:31:47.330670] E
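A minimal sketch of the per-node checks commonly run between such rolling glusterd restarts (an assumption about the workflow, not part of the original report; ovirt_vol is the volume named above):
  systemctl restart glusterd
  gluster peer status                  # all peers should return to 'Connected'
  gluster volume status ovirt_vol      # bricks and self-heal daemons online
  gluster volume heal ovirt_vol info   # let pending heals drain before touching the next node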
2017 Jul 03
0
Failure while upgrading gluster to 3.10.1
On Mon, 3 Jul 2017 at 12:28, Pawan Alwandi <pawan at platform.sh> wrote: > Hello Atin, > > I've gotten around to this and was able to get upgrade done using 3.7.0 > before moving to 3.11. For some reason 3.7.9 wasn't working well. > > On 3.11 though I notice that gluster/nfs is really made optional and > nfs-ganesha is being recommended. We have plans to
2017 May 29
1
Failure while upgrading gluster to 3.10.1
Sorry for the big attachment in the previous mail... the last 1000 lines of those logs are attached now. On Mon, May 29, 2017 at 4:44 PM, Pawan Alwandi <pawan at platform.sh> wrote: > > > On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee <amukherj at redhat.com> > wrote: > >> >> On Thu, 25 May 2017 at 19:11, Pawan Alwandi <pawan at platform.sh> wrote: >>
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin, I've gotten around to this and was able to get upgrade done using 3.7.0 before moving to 3.11. For some reason 3.7.9 wasn't working well. On 3.11 though I notice that gluster/nfs is really made optional and nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha on new clusters but would like to have glusterfs-gnfs on existing clusters so a seamless upgrade
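For context, a hedged example of keeping the legacy built-in Gluster NFS server (gnfs) enabled on an existing volume, assuming the glusterfs-gnfs package is installed (volume name is a placeholder):
  gluster volume set myvol nfs.disable off   # keep serving NFSv3 via the built-in gnfs server
  gluster volume status myvol                # the "NFS Server on ..." entries should show as online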
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
Hi, I've run into trouble after a few minutes of glusterfs operation. I set up a 4-node replica 4 storage, with 2 bricks on every server: # gluster volume create vms replica 4 transport tcp 192.168.7.1:/srv/vol1 192.168.7.2:/srv/vol1 192.168.7.3:/srv/vol1 192.168.7.4:/srv/vol1 192.168.7.1:/srv/vol2 192.168.7.2:/srv/vol2 192.168.7.3:/srv/vol2 192.168.7.4:/srv/vol2 I started copying files with
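As a hedged follow-up to that create command, the usual sanity checks before loading data look like this (generic, not from the original post; vms is the volume created above):
  gluster volume start vms
  gluster volume info vms      # verify replica count and brick order
  gluster volume status vms    # all 8 bricks should be online before copying files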
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete information than I did the first time around. The full rebalance log from the machine where I started the rebalance can be found at the following link. It is slightly redacted - one search/replace was made to replace an identifying word with REDACTED. https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/11/2018 11:54 AM, Alex K wrote: Hey guys, returning to this topic after disabling the quorum: cluster.quorum-type: none cluster.server-quorum-type: none I've run into a number of gluster errors (see below). I'm using gluster as the backend for my NFS storage. I have gluster running on two nodes, nfs01 and nfs02. It's mounted on /n on each host. The path /n is
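A hedged sketch of the usual alternative to disabling quorum on a two-node setup: add a third, arbiter brick so quorum can stay enabled (hostname and brick path are placeholders, not from this thread; gv01 is the volume from the subject line):
  gluster volume add-brick gv01 replica 3 arbiter 1 arbiter-host:/bricks/gv01/arbiter
  gluster volume set gv01 cluster.quorum-type auto
  gluster volume set gv01 cluster.server-quorum-type server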
2011 Jul 11
0
Instability when using RDMA transport
I've run into a problem with Gluster stability with the RDMA transport. Below is a description of the environment, a simple script that can replicate the problem, and log files from my test system. I can work around the problem by using the TCP transport over IPoIB but would like some input on what may be making the RDMA transport fail in this case. ===== Symptoms ===== - Error from test
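A hedged sketch of the TCP-over-IPoIB workaround mentioned above (hostnames, volume name and brick paths are placeholders): create the volume with the tcp transport but point the bricks and the mount at the IPoIB addresses.
  gluster volume create rdmatest transport tcp ib-node1:/bricks/b1 ib-node2:/bricks/b1
  gluster volume start rdmatest
  mount -t glusterfs ib-node1:/rdmatest /mnt/rdmatest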
2011 Jun 09
1
NFS problem
Hi, I have the same problem as Juergen. My volume is a simple replicated volume with 2 hosts and GlusterFS 3.2.0: Volume Name: poolsave Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: ylal2950:/soft/gluster-data Brick2: ylal2960:/soft/gluster-data Options Reconfigured: diagnostics.brick-log-level: DEBUG network.ping-timeout: 20 performance.cache-size: 512MB
2017 Jun 15
1
peer probe failures
Hi, I'm having a similar issue; were you able to solve it? Thanks. Hey all, I've got a strange problem going on here. I've installed glusterfs-server on ubuntu 16.04: glusterfs-client/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic] glusterfs-common/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic] glusterfs-server/xenial,now 3.7.6-1ubuntu1 amd64 [installed] I can
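A minimal sketch of the basic peer-probe checks for this kind of failure (hostname is a placeholder; the commands are standard, the diagnosis path is an assumption):
  systemctl status glusterfs-server            # service name used by the Ubuntu 16.04 package
  gluster peer probe gluster2                  # from the first node
  gluster peer status                          # the probed peer should show as 'Connected'
  tail -n 50 /var/log/glusterfs/glusterd.log   # look for the probe's RPC/connection errors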
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote: > On 4/9/2018 2:45 AM, Alex K wrote: > Hey Alex, > > With two nodes, the setup works but both sides go down when one node is > missing. Still I set the below two params to none and that solved my issue: > > cluster.quorum-type: none > cluster.server-quorum-type: none > > yes this disables