similar to: gluster csi driver

Displaying 20 results from an estimated 4000 matches similar to: "gluster csi driver"

2023 Mar 29
1
gluster csi driver
Hi Joe, On Wed, Mar 29, 2023, 12:55 PM Joe Julian <me at joejulian.name> wrote: > I was chatting with Humble about the removed gluster support for > Kubernetes 1.26 and the long deprecated CSI driver. > > I'd like to bring it back from archive and maintain it. If anybody would > like to participate, that'd be great! If I'm just maintaining it for my > own use,
2023 Mar 29
1
gluster csi driver
Looking at this code, it's way more than I was looking for, too. I just need a replacement for the in-tree driver. I have a volume. I have about a half dozen pods that use that volume. I just need the same capabilities as the in-tree driver to satisfy that need. I want to use kadalu to replace the hacky thing I'm still doing using hostpath_pv, but last time I checked, it didn't build
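For context, the hostpath_pv workaround mentioned above usually boils down to a statically provisioned hostPath PersistentVolume; a minimal sketch only (names, size, and the mount path are assumptions, and it relies on the gluster volume already being fuse-mounted at that path on every node):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data                # hypothetical name
spec:
  capacity:
    storage: 100Gi                 # hypothetical size
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/gluster/shared      # assumption: gluster volume fuse-mounted here on every node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""             # bind to the static PV above, skip dynamic provisioning
  volumeName: shared-data
  resources:
    requests:
      storage: 100Gi
EOF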
2023 Oct 27
1
State of the gluster project
It is very unfortunate that Gluster is not maintained. From Kadalu Technologies, we are trying to set up a small team dedicated to maintaining GlusterFS for the next three years. This will only be possible if we get funding from the community and companies. The details of the proposal are here: https://kadalu.tech/gluster/ About Kadalu Technologies: Kadalu Technologies was started in 2019 by a few
2017 Nov 12
1
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
> Clarification for below logs: > > - 'dev_static' is the gluster volume. > - 'int-kube-01' is the gluster client. > - '10.51.70.151' is the first node in a three node (2 replica, 1 arbiter) gluster cluster. > - '/var/lib/kubelet/...../iss3dev-static' is a directory on the client that should be mounting
2023 Oct 27
1
State of the gluster project
Maybe a bit OT... I'm no expert on either, but the concepts are quite similar. Both require "extra" nodes (metadata and monitor), but those can be virtual machines or you can host the services on OSD machines. We don't use snapshots, so I can't comment on that. My experience with Ceph is limited to having it working on Proxmox. No experience yet with CephFS. BeeGFS is
2017 Nov 08
0
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
> On 8 Nov 2017, at 9:03 pm, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > That is not the log for the mount. Please check /var/log/glusterfs/var-lib-mountedgluster.log on the system on which you are running the mount process. > > Please provide the volume config details as well (gluster volume info) from one of the server nodes. > Oh I'm sorry, I
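For anyone following along, the two checks being asked for look roughly like this (the log file name is the one given in the reply above and is derived from the mount path, so adjust it to your own mount point):

# on the client: the fuse mount log is named after the mount path
tail -n 100 /var/log/glusterfs/var-lib-mountedgluster.log
# on any server node: dump the volume layout and options
gluster volume info dev_static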
2017 Jul 24
0
gluster-heketi-kubernetes
Hi Bishoy, Adding Talur who can help address your queries on Heketi. @wattsteve's github repo on glusterfs-kubernetes is a bit dated. You can either refer to gluster/gluster-kubernetes or heketi/heketi for current documentation and operational procedures. Regards, Vijay On Fri, Jul 21, 2017 at 2:19 AM, Bishoy Mikhael <b.s.mikhael at gmail.com> wrote: > Hi, > > I'm
2017 Jul 31
0
gluster-heketi-kubernetes
Adding more people to the thread. I am currently not able to analyze the logs. On Thu, Jul 27, 2017 at 5:58 AM, Bishoy Mikhael <b.s.mikhael at gmail.com> wrote: > Hi Talur, > > I've successfully got Gluster deployed as a DaemonSet using k8s spec file > glusterfs-daemonset.json from > https://github.com/heketi/heketi/tree/master/extras/kubernetes > > but then when I
2017 Jul 21
2
gluster-heketi-kubernetes
Hi, I'm trying to deploy Gluster and Heketi on a Kubernetes cluster. I'm following the guide at https://github.com/gluster/gluster-kubernetes/ but the video referenced on the page shows json files being used, while the git repo has only yaml files; they are quite similar, though, except that Gluster is a Deployment, not a DaemonSet. I deploy the Gluster DaemonSet successfully, but heketi is giving me the
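For what it's worth, kubectl reads json and yaml spec files interchangeably, so the format difference by itself shouldn't matter; a generic sketch of the deploy/verify loop (the file name and the storagenode=glusterfs label are assumptions based on the repo layout at the time and may differ):

kubectl label node <node-name> storagenode=glusterfs   # only if the DaemonSet's nodeSelector expects it (check the spec)
kubectl create -f glusterfs-daemonset.yaml             # kubectl accepts json or yaml
kubectl get daemonset,pods -o wide                     # expect one glusterfs pod per labelled node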
2017 Jul 27
2
gluster-heketi-kubernetes
Hi Talur, I've successfully got Gluster deployed as a DaemonSet using the k8s spec file glusterfs-daemonset.json from https://github.com/heketi/heketi/tree/master/extras/kubernetes but then when I try deploying heketi using the heketi-deployment.json spec file, I end up with a CrashLoopBackOff pod.
# kubectl get pods
NAME                      READY     STATUS             RESTARTS   AGE
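The usual first steps for any CrashLoopBackOff pod (generic kubectl; the pod name below is a placeholder for whatever kubectl get pods shows):

kubectl describe pod <heketi-pod-name>          # recent events and the container's last exit reason
kubectl logs <heketi-pod-name> --previous       # log of the crashed (previous) container instance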
2017 Sep 08
1
Redis db permission issue while running GitLab in Kubernetes with Gluster
Getting this answer back on the list in case anyone else is trying to share storage. Thanks for the docs pointer, Tanner. -John On Thu, Sep 7, 2017 at 6:50 PM, Tanner Bruce <tanner.bruce at farmersedge.ca> wrote: > You can set a security context on your pod to set the gid as needed: > https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ > > > This
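A minimal sketch of that security-context approach, assuming the data sits on an existing PVC; the names, image, and gid below are placeholders, not taken from the original thread. fsGroup makes the mounted volume group-owned by that gid for volume plugins that support ownership management:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: redis-example              # placeholder
spec:
  securityContext:
    fsGroup: 1000                  # placeholder gid applied to the mounted volume
  containers:
  - name: redis
    image: redis:4                 # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-data        # placeholder: an existing PVC backed by the shared storage
EOF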
2018 May 10
0
broken gluster config
also I have this "split brain"?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
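For anyone who lands on this later: the gluster CLI (3.7 and newer) can resolve a file-level split-brain by discarding one copy; a sketch using the path shown above, so double-check which copy you want to keep before running either resolution command:

gluster volume heal gv0 info split-brain
# keep whichever copy is bigger ...
gluster volume heal gv0 split-brain bigger-file /glusterp1/images/centos-server-001.qcow2
# ... or keep the copy from a specific brick
gluster volume heal gv0 split-brain source-brick glusterp2:/bricks/brick1/gv0 /glusterp1/images/centos-server-001.qcow2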
2017 Jun 01
0
Who's using OpenStack Cinder & Gluster? [ Was Re: [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder]
Joe, Agree with you on turning this around into something more positive. One aspect that would really help us decide on our next steps here is the actual number of deployments that will be affected by the removal of the gluster driver in Cinder. If you are running or aware of a deployment of OpenStack Cinder & Gluster, can you please respond on this thread or to me & Niels in private
2017 Dec 04
0
Gluster Monthly Newsletter, November 2017
Gluster Monthly Newsletter, November 2017 Come find us at KubeCon/CloudNativeCon in Austin, December 6-8! Special sessions around Storage include: Thursday, December 7, 11:55am - 12:30pm: Kubernetes Feature Prototyping with External Controllers and Custom Resource Definitions - Tomas Smetana, Red Hat
2018 May 10
2
broken gluster config
[root@glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick
2017 Jun 29
2
Persistent storage for docker containers from a Gluster volume
On 28-Jun-2017 5:49 PM, "mabi" <mabi at protonmail.ch> wrote: Anyone? -------- Original Message -------- Subject: Persistent storage for docker containers from a Gluster volume Local Time: June 25, 2017 6:38 PM UTC Time: June 25, 2017 4:38 PM From: mabi at protonmail.ch To: Gluster Users <gluster-users at gluster.org> Hello, I have a two node replica 3.8 GlusterFS
2018 May 10
2
broken gluster config
Whatever repair happened has now finished, but I still have this, and I can't find anything so far telling me how to fix it. Looking at http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/ I can't determine what file or directory in gv0 is actually the issue.
[root@glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick
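One way to map the reported gfid back to a real path is to look it up on a brick directly; a sketch, assuming the entry is a regular file (for which the .glusterfs entry is a hard link to the actual file):

BRICK=/bricks/brick1/gv0
GFID=eafb8799-4e7a-4264-9213-26997c5a4693
# gfid entries live under .glusterfs/<first two chars>/<next two chars>/<gfid>
find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path "*/.glusterfs/*"

The path it prints (relative to the brick root) is the file that heal info is complaining about.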
2018 May 10
0
broken gluster config
Trying to read this, I can't understand what is wrong.
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
2017 Jun 29
0
Persistent storage for docker containers from a Gluster volume
Hi, GlusterFS works fine for large files (in most cases it's used as a VM image store); with Docker you'll generate a bunch of small files, and if you want good performance, maybe look at [1] and [2]. Also, a two-node replica is a bit dangerous: under high load with small files there is a real risk of a split-brain situation, so think about an arbiter
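For reference, the arbiter being suggested is a third, metadata-only replica; on an existing two-node replica it can be added with something like the following (volume name, hostname, and brick path are placeholders; the arbiter brick stores only file names and metadata, so it can live on a much smaller disk):

gluster volume add-brick <volname> replica 3 arbiter 1 node3:/bricks/arbiter/<volname>
gluster volume info <volname>     # should now show "Number of Bricks: 1 x (2 + 1) = 3"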
2018 Jul 29
2
[fdo] Postmortem: July 17th GitLab outage
Hi, On Tues Jul 17th, we had a full GitLab outage from 14:00 to 18:00 UTC, whilst attempting to upgrade the underlying storage. This was a semi-planned outage, which we'd hoped would last for approximately 30min. During the outage, the GitLab web UI and API, as well as HTTPS git clones through https://gitlab.freedesktop.org, were completely unavailable, giving connection timeout errors.