Displaying 20 results from an estimated 3000 matches similar to: "Persistent storage for docker containers from a Gluster volume"
2017 Jun 28
0
Persistent storage for docker containers from a Gluster volume
Anyone?
> -------- Original Message --------
> Subject: Persistent storage for docker containers from a Gluster volume
> Local Time: June 25, 2017 6:38 PM
> UTC Time: June 25, 2017 4:38 PM
> From: mabi at protonmail.ch
> To: Gluster Users <gluster-users at gluster.org>
> Hello,
I have a two-node replica GlusterFS 3.8 cluster and am trying to find out the best way
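A common pattern for this kind of setup (a sketch only, with hypothetical volume, host, and path names) is to mount the Gluster volume on each Docker host over FUSE and bind-mount a subdirectory into the container:

    # mount the Gluster volume on the Docker host (hypothetical names)
    mount -t glusterfs node1:/myvol /mnt/gluster

    # bind-mount a subdirectory into the container as its persistent storage
    docker run -d -v /mnt/gluster/appdata:/data --name web nginx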
2017 Jun 29
2
Persistent storage for docker containers from a Gluster volume
On 28-Jun-2017 5:49 PM, "mabi" <mabi at protonmail.ch> wrote:
Anyone?
-------- Original Message --------
Subject: Persistent storage for docker containers from a Gluster volume
Local Time: June 25, 2017 6:38 PM
UTC Time: June 25, 2017 4:38 PM
From: mabi at protonmail.ch
To: Gluster Users <gluster-users at gluster.org>
Hello,
I have a two-node replica GlusterFS 3.8
2017 Jun 29
0
Persistent storage for docker containers from a Gluster volume
Hi,
GlusterFS works fine for large files (in most cases it's
used as a VM image store), but with Docker you'll generate a bunch of
small files, and if you want good performance you may want to look at [1]
and [2].
Also, a two-node replica is a bit dangerous: under high load with small
files there is a real risk of a split-brain situation, so think
about an arbiter
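As a sketch of the arbiter suggestion, with hostnames and brick paths made up for illustration: a replica 3 arbiter 1 volume stores full data on two bricks and only metadata on the third, avoiding two-way split brain without tripling storage.

    gluster volume create myvol replica 3 arbiter 1 \
        node1:/data/brick1 node2:/data/brick1 node3:/data/arbiter1
    gluster volume start myvol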
2014 Jan 30
2
Notes on building libguestfs in a systemd-nspawn container
Last night I was tinkering with `systemd-nspawn` -- a namespace-based
container for testing -- and I thought I'd post what I tried with libguestfs
here:
Prerequisite
------------
Because of an audit subsystem incompatibility bug - rhbz#966807[1] - turn
off auditing by booting the host with 'audit=0' on the kernel command line.
(NOTE: There's work in progress[2] in the upstream kernel to fix
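On a grub2-based host that would look something like this (a sketch, assuming /etc/default/grub is in use):

    # append audit=0 to the kernel command line and regenerate the grub config
    sed -i 's/^GRUB_CMDLINE_LINUX="/&audit=0 /' /etc/default/grub
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot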
2023 Aug 07
1
Samba-AD in Docker
On Monday, August 7, 2023 6:05:03 AM EDT Andrew Bartlett via samba wrote:
> https://github.com/samba-in-kubernetes
>
Andrew, thanks a bunch for pointing people to our org! I really appreciate it.
More below...
> On Mon, 2023-08-07 at 11:08 +0200, Joachim Lindenberg via samba wrote:
> > Hello Anantha, Michael,
> > IIRC this is somewhat optimistic or a secret sausage. For
2015 Jul 13
1
Kubernetes 1.0.0 in virt7-docker-common
Hi,
The latest kubernetes-1.0.0-0.2.git2c27b1f.el7 build is now available in the
virt7-docker-common repo [1].
Please test and give feedback.
[1]
http://cbs.centos.org/repos/virt7-docker-common-candidate/x86_64/os/Packages/
--
Navid
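For anyone wanting to try it, a repo stanza along these lines should work; the file name and gpgcheck setting are assumptions, only the baseurl comes from [1]:

    # /etc/yum.repos.d/virt7-docker-common-candidate.repo (hypothetical file)
    [virt7-docker-common-candidate]
    name=virt7 docker common candidate
    baseurl=http://cbs.centos.org/repos/virt7-docker-common-candidate/x86_64/os/
    enabled=1
    gpgcheck=0

    yum install kubernetes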
2014 Jan 30
0
Re: Notes on building libguestfs in a systemd-nspawn container
On 01/30/2014 07:41 AM, Kashyap Chamarthy wrote:
> Last night I was tinkering with `systemd-nspawn` -- a namespace-based
> container for testing -- and I thought I'd post what I tried with libguestfs
> here:
>
>
> Prerequisite
> ------------
>
> Because of an audit subsystem incompatibility bug - rhbz#966807[1] - turn
> off auditing by booting the host with
2017 Aug 28
3
self-heal not working
Excuse me for my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? And do I need to do that through a FUSE mount, or simply on every brick directly?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 28, 2017 5:58 AM
> UTC Time: August 28, 2017 3:58 AM
> From: ravishankar at redhat.com
>
2017 Aug 28
2
self-heal not working
Thank you for the command. I ran it on all my nodes and now, finally, the self-heal daemon does not report any files to be healed. Hopefully this scenario can be handled properly in newer versions of GlusterFS.
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 28, 2017 10:41 AM
> UTC Time: August 28, 2017 8:41 AM
>
2017 Aug 28
0
self-heal not working
Great, can you raise a bug for the issue so that it is easier to keep
track of it (plus you'll be notified if the patch is posted)? The
general guidelines are @
https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Reporting-Guidelines
but you just need to provide in the bug whatever you described in this
email thread:
i.e. volume info, heal info, getfattr and stat output of
2017 Aug 27
2
self-heal not working
----- Original Message -----
> From: "mabi" <mabi at protonmail.ch>
> To: "Ravishankar N" <ravishankar at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org>
> Sent: Sunday, August 27, 2017 3:15:33 PM
> Subject: Re: [Gluster-users] self-heal not working
>
>
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:29 PM, mabi wrote:
> Excuse me for my naive questions, but how do I reset the afr.dirty
> xattr on the file to be healed? And do I need to do that through a
> FUSE mount, or simply on every brick directly?
>
>
Directly on the bricks: `setfattr -n trusted.afr.dirty -v
0x000000000000000000000000
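The snippet above is cut off by the archive; for illustration, the full invocation would be along these lines (the brick path is hypothetical):

    # run on each brick, against the file's path inside the brick
    setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 \
        /data/brick1/path/to/file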
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like
"got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
Anyway, I reproduced it by manually setting the afr.dirty bit for a
zero-byte file on all 3 bricks. Since there are no afr pending xattrs
indicating good/bad copies and all files are zero bytes, the data
self-heal algorithm just picks the
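For illustration, a reproduction along those lines could look like this on each of the 3 bricks; the path is hypothetical and the xattr value is an assumption about which counter marks the data as dirty:

    # create a zero-byte file directly on the brick and mark it dirty
    touch /data/brick1/testfile
    setfattr -n trusted.afr.dirty -v 0x000000010000000000000000 \
        /data/brick1/testfile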
2017 Aug 22
3
self-heal not working
Thanks for the additional hints. I have the following two questions first:
- In order to launch the index heal, is the following command correct:
gluster volume heal myvolume
- If I run a "volume start force", will it cause any short disruptions for my clients which mount the volume through FUSE? If yes, for how long? This is a production system, which is why I am asking.
> --------
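For reference, the index heal and its status check look like this (volume name as used in the thread):

    gluster volume heal myvolume          # kick off an index heal
    gluster volume heal myvolume info     # list entries still pending heal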
2017 Aug 22
0
self-heal not working
On 08/22/2017 02:30 PM, mabi wrote:
> Thanks for the additional hints. I have the following two questions first:
>
> - In order to launch the index heal, is the following command correct:
> gluster volume heal myvolume
>
Yes
> - If I run a "volume start force", will it cause any short disruptions
> for my clients which mount the volume through FUSE? If yes, for how long?
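For reference, the second command under discussion; my understanding is that it starts any of the volume's processes (bricks, self-heal daemon) that are not currently running, rather than remounting clients:

    gluster volume start myvolume force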
2017 Aug 24
2
self-heal not working
Thanks for confirming the command. I have now enabled the DEBUG client log level, run a heal, and attached the glustershd log files of all 3 nodes to this mail.
The volume concerned is called myvol-pro; the other 3 volumes have no problems so far.
Also note that in the meantime it looks like the file has been deleted by the user, and as such the heal info command does not show the file name
2017 Aug 23
2
self-heal not working
I just saw the following bug which was fixed in 3.8.15:
https://bugzilla.redhat.com/show_bug.cgi?id=1471613
Is it possible that the problem I described in this post is related to that bug?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 22, 2017 11:51 AM
> UTC Time: August 22, 2017 9:51 AM
> From: ravishankar at
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:57 AM, Ben Turner wrote:
> ----- Original Message -----
>> From: "mabi" <mabi at protonmail.ch>
>> To: "Ravishankar N" <ravishankar at redhat.com>
>> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org>
>> Sent: Sunday, August 27, 2017 3:15:33 PM
>>
2017 Aug 27
0
self-heal not working
Thanks Ravi for your analysis. So as far as I understand there is nothing to worry about, but my question now would be: how do I get rid of this file from the heal info output?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 27, 2017 3:45 PM
> UTC Time: August 27, 2017 1:45 PM
> From: ravishankar at redhat.com
> To: mabi <mabi at
2017 Aug 24
0
self-heal not working
Unlikely. In your case only the afr.dirty xattr is set, not the
afr.volname-client-xx xattr.
`gluster volume set myvolume diagnostics.client-log-level DEBUG` is right.
On 08/23/2017 10:31 PM, mabi wrote:
> I just saw the following bug which was fixed in 3.8.15:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1471613
>
> Is it possible that the problem I described in this post is
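Once debugging is done, the option can be returned to its default with a volume reset, a standard follow-up to the set command above:

    # restore the default client log level after collecting the debug logs
    gluster volume reset myvolume diagnostics.client-log-level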