Displaying 20 results from an estimated 4000 matches similar to: "Heketi v6.0.0 available for download"

2017 Dec 18
1
Heketi v5.0.1 security release available for download
Heketi v5.0.1 is now available. This release [1] fixes a flaw found in the Heketi API that permits issuing OS commands through specially crafted requests, possibly leading to escalation of privileges. More details can be found in CVE-2017-15103. [2] If authentication is turned "on" in the Heketi configuration, the flaw can be exploited only by those who possess authentication
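The mitigation the advisory alludes to hinges on Heketi's JWT authentication setting. A minimal sketch of a heketi.json with authentication turned on might look like the following (key values are placeholders, and the surrounding glusterfs section is omitted):

```json
{
  "port": "8080",
  "use_auth": true,
  "jwt": {
    "admin": { "key": "REPLACE_WITH_ADMIN_SECRET" },
    "user": { "key": "REPLACE_WITH_USER_SECRET" }
  }
}
```

With `use_auth` set to `true`, every API request must carry a JWT signed with one of these keys, which is why the advisory notes the flaw is then only exploitable by holders of valid credentials.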
2017 Jul 31
0
gluster-heketi-kubernetes
Adding more people to the thread. I am currently not able to analyze the logs. On Thu, Jul 27, 2017 at 5:58 AM, Bishoy Mikhael <b.s.mikhael at gmail.com> wrote: > Hi Talur, > > I've successfully got Gluster deployed as a DaemonSet using k8s spec file > glusterfs-daemonset.json from > https://github.com/heketi/heketi/tree/master/extras/kubernetes > > but then when I
2017 Jul 24
0
gluster-heketi-kubernetes
Hi Bishoy, Adding Talur who can help address your queries on Heketi. @wattsteve's github repo on glusterfs-kubernetes is a bit dated. You can either refer to gluster/gluster-kubernetes or heketi/heketi for current documentation and operational procedures. Regards, Vijay On Fri, Jul 21, 2017 at 2:19 AM, Bishoy Mikhael <b.s.mikhael at gmail.com> wrote: > Hi, > > I'm
2017 Jul 21
2
gluster-heketi-kubernetes
Hi, I'm trying to deploy Gluster and Heketi on a Kubernetes cluster. I'm following the guide at https://github.com/gluster/gluster-kubernetes/ but the video referenced on the page shows JSON files while the git repo has only YAML files; they are quite similar, though, but Gluster is a Deployment, not a DaemonSet. I deploy the Gluster DaemonSet successfully, but heketi is giving me the
2017 Jul 27
2
gluster-heketi-kubernetes
Hi Talur, I've successfully got Gluster deployed as a DaemonSet using k8s spec file glusterfs-daemonset.json from https://github.com/heketi/heketi/tree/master/extras/kubernetes but then when I try deploying heketi using heketi-deployment.json spec file, I end up with a CrashLoopBackOff pod. # kubectl get pods NAME READY STATUS RESTARTS AGE
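For a pod stuck in CrashLoopBackOff like the heketi pod described above, the usual first diagnostic steps are generic kubectl usage (not specific to how this thread was resolved; the pod name is a placeholder):

```shell
# Show events and the reason the container keeps restarting
kubectl describe pod <heketi-pod-name>

# Logs of the current container, and of the previous crashed instance
kubectl logs <heketi-pod-name>
kubectl logs <heketi-pod-name> --previous
```

The `--previous` flag is the important one here: with a crash loop, the current container is often too young to have logged anything useful.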
2017 Aug 30
0
Unable to use Heketi setup to install Gluster for Kubernetes
Hi, I have the following setup in place: 1 node: RancherOS running the Rancher application for Kubernetes setup; 2 nodes: RancherOS running the Rancher agent; 1 node: CentOS 7 workstation with kubectl installed and the folder cloned/downloaded from https://github.com/gluster/gluster-kubernetes, from which I run the Heketi setup (gk-deploy -g). I also have the rancher-glusterfs-server container running with
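For context, the deploy script mentioned here is invoked from the cloned gluster-kubernetes repo roughly as follows (per that repo's README; the topology file name is the conventional default and may differ per setup):

```shell
cd gluster-kubernetes/deploy

# -g: also deploy the GlusterFS DaemonSet, not just heketi;
# topology.json describes the nodes and raw block devices heketi may use
./gk-deploy -g topology.json
```

The script requires a topology file listing each storage node and its unformatted block devices; running it against nodes without spare raw devices is a common failure mode.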
2017 Oct 19
0
Trying to remove a brick (with heketi) fails...
Hello, I have a gluster cluster with 4 nodes that is managed using heketi. I want to test the removal of one node. We have several volumes on it, some with rep=2, others with rep=3. I get the following error: [root at CTYI1458 .ssh]# heketi-cli --user admin --secret "******" node remove 749850f8e5fd23cf6a224b7490499659 Error: Failed to remove device, error: Cannot replace brick
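For reference, heketi's removal flow expects devices to be drained before the node goes; a sketch of the usual sequence (the node ID is the one from the poster's output, the device IDs are illustrative, and this assumes the remaining nodes have enough capacity to receive the migrated bricks):

```shell
# Disable the node so no new bricks are placed on it
heketi-cli --user admin --secret '***' node disable 749850f8e5fd23cf6a224b7490499659

# Disable, then remove, each device on that node; remove migrates bricks away
heketi-cli --user admin --secret '***' device disable <device-id>
heketi-cli --user admin --secret '***' device remove <device-id>

# Only once all devices are gone can the node itself be removed
heketi-cli --user admin --secret '***' node remove 749850f8e5fd23cf6a224b7490499659
```

A "Cannot replace brick" error typically means heketi could not find a destination device satisfying the volume's replica placement constraints, which is easy to hit on a 4-node cluster with rep=3 volumes.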
2017 Jul 26
0
Heketi and Geo Replication.
Hello, Is it possible to set up a Heketi Managed gluster cluster in one datacenter, and then have geo replication for all volumes to a second cluster in another datacenter? I've been looking at that, but haven't really found a recipe/solution for this. Ideally what I want is that when a volume is created in cluster1, that a slave volume is automatically created in cluster2, and
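Heketi itself does not appear to automate geo-replication (consistent with the poster finding no recipe), but plain GlusterFS geo-replication between two clusters is set up roughly like this, outside of heketi (volume and host names are hypothetical, and this assumes passwordless SSH from master to slave has been prepared):

```shell
# On the master cluster: generate the pem keys, then create and start the session
gluster system:: execute gsec_create
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# Verify the session is Active/Passive rather than Faulty
gluster volume geo-replication mastervol slavehost::slavevol status
```

The "automatically create a slave volume whenever heketi creates a master volume" part would still need external tooling watching heketi's volume-create events.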
2018 Feb 28
0
Gluster Monthly Newsletter, February 2018
Gluster Monthly Newsletter, February 2018 Special thanks to all of our contributors working to get Gluster 4.0 out into the wild. Over the coming weeks, we'll be posting on the blog about some of the new improvements coming out in Gluster 4.0, so watch for that! Glustered: A Gluster Community Gathering is happening on March 8, in connection with Incontro DevOps 2018. More details here:
2020 Sep 23
0
Re: consuming pre-created tap - with multiqueue
On Tue, Sep 22, 2020 at 01:48:08PM +0200, Miguel Duarte de Mora Barroso wrote: > Hello, > > On KubeVirt, we are trying to pre-create a tap device, then instruct > libvirt to consume it (via the type=ethernet , managed='no' > attributes). > > It works as expected, **unless** when we create a multi-queue tap device. > > The difference when creating the tap
2020 Sep 22
2
consuming pre-created tap - with multiqueue
Hello, On KubeVirt, we are trying to pre-create a tap device, then instruct libvirt to consume it (via the type=ethernet, managed='no' attributes). It works as expected, **unless** we create a multi-queue tap device. The difference when creating the tap device is that we set the multi-queue flag; libvirt throws the following error when consuming it: ``` LibvirtError(Code=38,
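At the kernel level, the "multi-queue flag" the poster mentions is a single bit in the TUNSETIFF ioctl: each queue of a multi-queue tap is a separate fd on /dev/net/tun attached with IFF_MULTI_QUEUE set. A minimal sketch, assuming the standard constants from <linux/if_tun.h> (actually attaching requires CAP_NET_ADMIN or a matching device owner, so the function is illustrative):

```python
import fcntl
import struct

# Constants from <linux/if_tun.h>
TUNSETIFF = 0x400454CA
IFF_TAP = 0x0002
IFF_NO_PI = 0x1000
IFF_MULTI_QUEUE = 0x0100


def open_tap_queue(name: str, multi_queue: bool = True):
    """Open one queue of the tap device `name`.

    For an N-queue tap, call this N times; each call returns one
    file object whose fd is attached to the device.
    """
    flags = IFF_TAP | IFF_NO_PI
    if multi_queue:
        flags |= IFF_MULTI_QUEUE
    tun = open("/dev/net/tun", "r+b", buffering=0)
    # struct ifreq: 16-byte interface name followed by the flags word
    ifr = struct.pack("16sH", name.encode(), flags)
    fcntl.ioctl(tun, TUNSETIFF, ifr)
    return tun
```

This is also why a consumer such as libvirt must itself attach with IFF_MULTI_QUEUE when the device was created multi-queue: the flags of the attaching fd have to match the device.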
2020 Nov 04
1
consume existing tap device when libvirt / qemu run as different users
Hello, I'm having some doubts about consuming an existing - already configured - tap device from libvirt (with `managed='no' ` attribute set). In KubeVirt, we want to have the consumer side of the tap device run without the NET_ADMIN capability, which requires the UID / GID of the tap creator / opener to match, as per the kernel code in [0]. As such, we create the tap device (with
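One way to line up the tap creator's UID/GID with an unprivileged consumer, sketched with iproute2 (the device name and the uid/gid values are hypothetical):

```shell
# Create a multi-queue tap owned by a specific uid/gid; a process running
# as that uid/gid can then attach to the device via /dev/net/tun without
# CAP_NET_ADMIN, per the kernel ownership check the poster references.
ip tuntap add dev tap0 mode tap user 107 group 107 multi_queue
```

The same ownership can be set programmatically by the creating process with the TUNSETOWNER and TUNSETGROUP ioctls before handing the device off.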
2018 Jan 17
0
Does it make sense to upstream some MVT's?
Hi Sean, I had to add ‘v16f16’ to our out-of-tree target, and this was primarily to allow me to express lowering for all the OpenCL types (well, except for the ‘v3T’ types). The trend does seem to be towards larger bit-width SIMD registers, and as you say this will increase in time; but perhaps instead of using a discrete enumeration combined with additional entries in several
2018 Jan 17
1
Does it make sense to upstream some MVT's?
On Tue, Jan 16, 2018 at 11:13 PM, Martin J. O'Riordan <MartinO at theheart.ie> wrote: > Hi Sean, > > > > I had to add ‘v16f16’ to our out-of-tree target, and this was to > primarily to allow me to express lowering for all the OpenCL types (well, > except for the ‘v3T’ types). > > > > The trend does seem to be towards larger bit-width SIMD registers, and
2017 Oct 01
0
Gluster Monthly Newsletter, September 2017
Gluster Monthly Newsletter, September 2017 Gluster Summit is coming! October 27-28 in Prague, Czech Republic! https://www.gluster.org/events/summit2017/ Registration is open until October 20th, 2017. Noteworthy Threads: [Gluster-devel] Proposed Protocol changes for 4.0: Need feedback. http://lists.gluster.org/pipermail/gluster-devel/2017-September/053603.html [Gluster-devel] Call for help:
2018 Apr 04
0
Gluster Monthly Newsletter, March 2018
Gluster 4.0! At long last, Gluster 4.0 is released! Read more at: https://www.gluster.org/announcing-gluster-4-0/ Other updates about Gluster 4.0: https://www.gluster.org/more-about-gluster-d2/ https://www.gluster.org/gluster-4-0-kubernetes/ Want to give us feedback about 4.0? We've got our retrospective open from now until April 11. https://www.gluster.org/4-0-retrospective/ Welcome piragua!
2018 Jun 29
2
Joining CentOS Storage SIG
On Fri, Jun 29, 2018 at 12:57:13PM +0530, Saravanakumar Arumugam wrote: ... > It will be great if we can add documentation about running CentOS based > storage containers (like gluster / ceph containers). > I can contribute here as well. > > If you create a wiki login and pass your username, I can give you > > permissions to edit/add contents below > >
2017 Aug 10
0
Kubernetes v1.7.3 and GlusterFS Plugin
On Thu, Aug 10, 2017 at 10:25 PM, Christopher Schmidt <fakod666 at gmail.com> wrote: > Just created the container from here: https://github.com/gluster/gluster-containers/tree/master/CentOS > > And used stock Kubernetes 1.7.3, hence the included volume plugin and > Heketi version 4. > > Regardless of the glusterfs client version this is supposed to work. One patch
2017 Aug 10
2
Kubernetes v1.7.3 and GlusterFS Plugin
Just created the container from here: https://github.com/gluster/gluster-containers/tree/master/CentOS And used stock Kubernetes 1.7.3, hence the included volume plugin and Heketi version 4. Humble Devassy Chirammal <humble.devassy at gmail.com> wrote on Thu, 10 Aug 2017, 18:49: > Thanks... It's the same option. Can you let me know your glusterfs client > package version? >
2017 Aug 10
0
Kubernetes v1.7.3 and GlusterFS Plugin
As another solution, if you update the system where you run the application container to the latest glusterfs (3.11), this will be fixed as well, since it supports this mount option. --Humble On Thu, Aug 10, 2017 at 10:39 PM, Christopher Schmidt <fakod666 at gmail.com> wrote: > Ok, thanks. > > Humble Devassy Chirammal <humble.devassy at gmail.com> wrote on Thu, 10. >