similar to: libgfapi failover problem on replica bricks

Displaying 20 results from an estimated 1300 matches similar to: "libgfapi failover problem on replica bricks"

2014 Mar 28
2
Possible to use libgfapi with libvirt in CentOS 6.5?
Good Evening, I have read that libgfapi has been backported to qemu-kvm in RHEL 6.5 (and, by extension, CentOS and SL). However, I am unable to figure out how to actually make it work as described. Virt-manager still only seems to support glusterfs volumes via FUSE. I can use qemu-img to create a disk image on gluster://<server>/<Volume>, but virt-manager can only use it from a fuse
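The piece virt-manager could not generate is libvirt's network-backed disk definition, which is what hands the image to qemu via libgfapi rather than a FUSE path. A sketch of the domain XML, with the hostname, volume, and image name as placeholders:

```xml
<disk type='network' device='disk'>
  <!-- qemu opens the image through libgfapi, bypassing the FUSE mount -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source protocol='gluster' name='Volume/vm1.qcow2'>
    <host name='server' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

Editing this in by hand (`virsh edit`) works even when the GUI only offers FUSE-mounted paths.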
2013 Oct 11
1
libvirt and libgfapi in RHEL 6.5 beta
Dear All, Very pleased to see that the Red Hat 6.5 beta promises "Native Support for GlusterFS in QEMU allows native access to GlusterFS volumes using the libgfapi library". Can I ask whether virt-manager & libvirt can control libgfapi mounts? :) Or do I need to use oVirt? :( Many thanks, Jake
2012 Dec 10
0
Do libgfapi and FUSE mount performance match?
Dear gluster experts, I recently want to use JNA to bind libgfapi to Java, and I have one question: do libgfapi and FUSE mount performance match? Do the performance translators work for libgfapi too? Thank you very much.
2014 Dec 08
0
libgfapi disk locking in virtlockd not working
Hello. I'm playing with libgfapi network disks over IB and all is working fine, except disk locking (and true RDMA transport). I use virtlockd, and with a FUSE mount locking works as expected. But when I converted the disk definitions to libgfapi, locks are no longer created (though qemu starts and works fine). I tried both direct and indirect locking, with the same result: qemu works fine, no locks. my
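For reference, the direct and indirect locking modes mentioned map to a couple of libvirt configuration knobs; a minimal sketch, assuming default paths (whether virtlockd can lock a pathless gluster:// disk is exactly the open question here):

```
# /etc/libvirt/qemu.conf -- enable the lockd lock manager
lock_manager = "lockd"

# /etc/libvirt/qemu-lockd.conf -- indirect locking: leases are hashed
# into a shared directory instead of locking the disk path directly
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
```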
2015 Jun 03
2
Qemu-Libgfapi: periodical shutdown of virtual machines.
Hello, Would you be so kind as to help me with my problem concerning libgfapi? My host operating system is Ubuntu 14.04 LTS, the glusterfs version is 3.6.2, and the qemu version is 2.0.0. In our environment, guest virtual machines periodically go to the power-off state 'Powered Off' (Shutdown) with an ERROR in /var/log/syslog like: kernel: [5346607.988173] qemu-system-x86[29564]: segfault at 128 ip
2013 Jan 04
1
libgfapi docs
Hi guys: What's the status of documentation for libgfapi? -- Jay Vyas http://jayunit100.blogspot.com
2017 Jun 02
0
libgfapi with encryption?
Hi, I created an encrypted volume which appears to be working fine with FUSE, but the volume is supposed to store VM images (master key in place). I noticed some references to encryption in the libgfapi source code, so I decided to try it out. While attempting to create an image: # qemu-img create -f qcow2 gluster://gluster01/virt0/testing.img 30G Formatting
2015 Jun 08
0
Re: Qemu-Libgfapi: periodical shutdown of virtual machines.
On 03.06.2015 09:58, Igor Yakovlev wrote: > Hello, > > Would you so kind to help me with my problem concerning libgfapi. > > My host operating system is Ubuntu 14.04 LTS, version of glusterfs is > 3.6.2, and version of qemu is 2.0.0. > > In our environment guest virtual machines periodically go to power-off > state 'Powered Off'(Shutdown) with ERROR in
2016 Apr 11
0
High Guest CPU Utilization when using libgfapi
Hi, I am currently testing an OpenStack instance running on a Cinder volume with libgfapi. The instance is a Windows instance, and I found that when running a random 4k write workload the CPU utilization is very high: 90%, with about 86% in privileged time. I also tested the same workload with a volume from NFS, where CPU utilization is only around 5%. For gluster FUSE, the CPU utilization
2018 Mar 08
0
fuse vs libgfapi LIO performances comparison: how to make tests?
Dear support, I need to export a gluster volume with LIO for a virtualization system. At the moment I have a very basic test configuration: 2x HP 380 G7 (2x Intel X5670 (six cores @ 2.93 GHz), 72 GB RAM, RAID10 of 6x 10k rpm SAS disks, Intel X540-T2 10GbE NIC), directly interconnected. The Gluster configuration is replica 2; the OS is Fedora 27. For my tests I used dd, and I found strange results. Apparently the
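For dd-based comparisons the main trap is the page cache inflating write numbers; a minimal sketch (the local path `ddtest.bin` is illustrative — on the real setup it would point at the FUSE mount and at the LIO-exported LUN in turn):

```shell
# write 16 MiB and force it to stable storage so cached-only writes
# don't inflate the measured throughput
dd if=/dev/zero of=ddtest.bin bs=1M count=16 conv=fsync
```

For read tests, drop caches first (`echo 3 > /proc/sys/vm/drop_caches`) or use `iflag=direct`, otherwise the second run measures RAM.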
2011 Jul 04
1
writeLines + foreach/doMC
Hi, I'm processing sequencing data, trying to collapse the locations of each unique sequence and write the results to a file (as storing that in a table would require at least 10 GB of memory), so I wrote a function that, given a sequence id, provides the line to be stored library(doMC) # load library registerDoMC(12) # assign the Number of CPU
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with libvirt libgfapi access
Hi, After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows: [2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up [2016-11-02 14:26:41.864075] I [MSGID:
2011 Oct 20
1
Expand replicated bricks to more nodes/change replica count
Hi list I have set up several volumes on a two-node Gluster setup using "replica 2" configurations. I would like to add two more nodes to the trusted pool so that all volumes are replicated on 4 nodes. I wonder if that can be done online, but after doing some research on this I didn't find evidence that it's possible to change the replica count _after_ the volume has been
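Changing the replica count after creation is done by passing the new count to `add-brick`; support for this arrived in later 3.x releases. A sketch, assuming the two new peers are `server3` and `server4` with bricks under `/data/brick` (all names are placeholders):

```shell
gluster peer probe server3
gluster peer probe server4
# raise replica 2 -> replica 4 while adding one brick per new node
gluster volume add-brick VOLNAME replica 4 \
    server3:/data/brick server4:/data/brick
```

The operation is online; the volume keeps serving while self-heal populates the new bricks.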
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on the same nodes as the servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on the same nodes as the servers (FUSE mounts) - under gluster there is a ZFS pool running as raidz2 with SSD
2011 Aug 24
1
Adding/Removing bricks/changing replica value of a replicated volume (Gluster 3.2.1, OpenSuse 11.3/11.4)
Hi! Until now, I have used Gluster in a 2-server setup (volumes created with replica 2). Upgrading the hardware, it would be helpful to extend the volume to replica 3 to integrate the new machine, adding its brick, and later to reduce it back to 2, removing the respective brick, once the old machine is decommissioned and no longer used. But it seems that this requires deleting and
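On later Gluster releases this no longer requires deleting the volume: `add-brick` and `remove-brick` both accept a new replica count. A sketch with hypothetical hostnames and brick paths:

```shell
# grow replica 2 -> replica 3 by adding the new machine's brick
gluster volume add-brick VOLNAME replica 3 newhost:/data/brick
# later, shrink back to replica 2 by dropping the retired machine's brick
gluster volume remove-brick VOLNAME replica 2 oldhost:/data/brick force
```

On 3.2.1, which the poster is running, the delete-and-recreate route was indeed the only option.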
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through. ---------- Forwarded message ---------- From: Martin Toth <snowmailer at gmail.com> Date: Thu, Sep 21, 2017 at 9:17 AM Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help] To: gluster-users at gluster.org Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com Hello all fellow GlusterFriends, I would like you to comment /
2018 Jan 10
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi, Please let us know what commands you have run so far and the output of the *gluster volume info* command. Thanks, Nithya On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hello > > We are trying to set up Gluster for our project/scratch storage HPC machine > using replicated mode with 2 nodes, 2 bricks each (14 TB each). > > Our goal is to be
2018 Jan 09
2
Creating cluster replica on 2 nodes 2 bricks each.
Hello We are trying to set up Gluster for our project/scratch storage HPC machine using replicated mode with 2 nodes, 2 bricks each (14 TB each). Our goal is to have a replicated system between node 1 and node 2 (the A bricks) and to add an additional 2 bricks (the B bricks) from the 2 nodes, so we can have a total of 28 TB in replicated mode. Node 1 [ (Brick A) (Brick B) ] Node 2 [ (Brick A) (Brick
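Brick order on the create line determines the replica pairs: consecutive bricks form a replica set. To mirror across the nodes rather than within one, the layout described would be created roughly as follows (volume and path names are hypothetical):

```shell
gluster volume create scratch replica 2 \
    node1:/bricks/a node2:/bricks/a \
    node1:/bricks/b node2:/bricks/b
```

That yields a 2 x 2 distributed-replicate volume: 28 TB usable out of 4 x 14 TB raw, with every file held once on each node.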
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
The procedure looks good. Remember to back up the Gluster config files before the update: /etc/glusterfs /var/lib/glusterd If you are *not* on the latest 3.7.x, you are unlikely to be able to go back to it, because the PPA only keeps the latest version of each major branch, so keep that in mind. With Ubuntu, every time you update, make sure to download and keep a manual copy of the .deb files. Otherwise you
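The backup itself is a single archive of the two directories named above; a sketch assuming the default paths:

```shell
# snapshot the gluster configuration before touching any packages
tar czf gluster-config-$(date +%F).tar.gz /etc/glusterfs /var/lib/glusterd
```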