2010 Oct 20
1
Glusterfs 3.1 with Ubuntu Lucid 32bit
I have posted this to the lists purely to help others - please do not
consider any of the following suitable for a production environment,
and follow these rough instructions at your own risk.
Feel free to add your own additions to this posting, as this may
or may not work for everybody!
I will not be held responsible for data loss, excessive CPU or memory
usage, etc.
INSTALL NEEDED
2013 Nov 01
1
Glusterfs IPv6 support
Hi all! Does Gluster 3.3.2 work with IPv6? I can't find an option in the
CLI to turn it on, and I can't find anything about it in the Admin Guide.
When I search Google I find a workaround - add to the volume config (file
var/lib/glusterd/vols/cluster/volumename.hostname.place-metadirname) the
line:
    option transport.address-family inet6
to the section:
    volume cluster-server
        type protocol/server
        ....
    end-volume
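For later releases the same option can reportedly also be set once for the
management daemon rather than per volume file - a sketch only, assuming the
stock /etc/glusterfs/glusterd.vol layout:
    volume management
        type mgmt/glusterd
        option transport.address-family inet6
        ...
    end-volume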
2017 Aug 23
0
GlusterFS as virtual machine storage
What he is saying is that, on a two-node volume, upgrading a node will
cause the volume to go down. That's nothing weird; you really should use
3 nodes.
On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
> On 23-08-2017 18:14, Pavel Szalbot wrote:
> > Hi, after many VM crashes during upgrades of Gluster, losing network
> > connectivity on one node etc. I would
2017 Sep 09
2
GlusterFS as virtual machine storage
Sorry, I did not start glusterfsd on the node I was shutting down
yesterday, and I just killed another one during the FUSE test, so it had
to crash immediately (only one of the three nodes was actually up). This
was definitely the first time it happened (only one node had been killed
yesterday).
Using FUSE seems to be OK with replica 3, so this may be gfapi-related,
or perhaps libvirt-related.
I tried
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi,
On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote:
> Pavel.
>
> Is there a difference between native client (fuse) and libgfapi in regards
> to the crashing/read-only behaviour?
I have switched to FUSE now, and the VM crashed (read-only remount)
immediately after one node started rebooting.
I tried to mount.glusterfs the same volume on a different server (not a VM),
running
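For reference, mounting the same volume natively from another host looks
like this (a sketch; the server and volume names are placeholders):
    mount -t glusterfs snode1:/gv0 /mnt/gv0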
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure -
what is the reason for IOPS to stop/fail? Rebooting a node is somewhat
similar to updating Gluster, replacing cabling, etc. IMO this should not
always end up with the arbiter blaming the other node, and even though I did
not investigate this issue deeply, I do not believe the blame is the reason
for IOPS to drop.
On Sep 7, 2017
2017 Aug 23
0
GlusterFS as virtual machine storage
Really? I can't see why. But I've never used arbiter, so you probably
know more about this than I do.
In any case, with replica 3, I've never had a problem.
On Wed, Aug 23, 2017 at 09:13:28PM +0200, Pavel Szalbot wrote:
> Hi, I believe it is not that simple. Even replica 2 + arbiter volume
> with default network.ping-timeout will cause the underlying VM to
> remount filesystem as
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel.
Is there a difference between native client (fuse) and libgfapi in
regards to the crashing/read-only behaviour?
We use Rep2 + Arb and can shut down a node cleanly, without issue on our
VMs. We do it all the time for upgrades and maintenance.
However, we are still on the native client as we haven't had time to work
on libgfapi yet. Maybe that is more tolerant.
We have Linux VMs mostly
2017 Sep 06
0
GlusterFS as virtual machine storage
Mh, I never had to do that, and I never had that problem. Is that an
arbiter-specific thing? With replica 3 it just works.
On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote:
> you need to set
>
> cluster.server-quorum-ratio 51%
>
> On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
>
> > Hi all,
> >
>
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, the docs mention two live nodes of a replica 3 volume blaming each
other and refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
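For anyone following along, a quick way to check whether a volume has landed
in that state (a sketch; the volume name is a placeholder):
    gluster volume heal gv0 info split-brain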
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for vm work loads just straight replica 3.
2017 Aug 24
2
GlusterFS as virtual machine storage
That really isn't an arbiter issue, or for that matter a Gluster issue. We
have seen that with vanilla NAS servers that had some issue or another.
Arbiter simply makes it less likely to be an issue than replica 2, but in
turn arbiter is less 'safe' than replica 3.
However, in regard to Gluster and RO behaviour:
The default timeout for most OS versions is 30 seconds and the Gluster
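Assuming the comparison being drawn here is between the guest's 30-second
disk timeout and Gluster's network.ping-timeout (42 seconds by default), one
commonly cited mitigation is lowering the latter - a sketch with a
placeholder volume name:
    gluster volume get gv0 network.ping-timeout
    gluster volume set gv0 network.ping-timeout 10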
2017 Aug 23
3
GlusterFS as virtual machine storage
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
with the default network.ping-timeout will cause the underlying VM to
remount its filesystem as read-only (a device error will occur) unless you
tune the mount options in the VM's fstab.
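As a sketch of the fstab tuning mentioned above (assuming an ext4 root on
/dev/vda1 inside the guest; errors=continue stops ext4 from remounting
read-only on I/O errors, at the cost of masking them):
    /dev/vda1  /  ext4  defaults,errors=continue  0  1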
-ps
On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote:
> What he is saying is that, on a two node volume, upgrading a node will
2017 Sep 07
2
GlusterFS as virtual machine storage
True, but working your way into that problem is a lot harder with replica 3
than with just replica 2 + arbiter.
On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi Neil, docs mention two live nodes of replica 3 blaming each other and
> refusing to do IO.
>
> https://gluster.readthedocs.io/en/latest/Administrator%
>
2017 Sep 06
0
GlusterFS as virtual machine storage
Hi all,
I have promised to do some testing and I finally find some time and
infrastructure.
So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a
replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack)
with its disk accessible through gfapi. The volume group is set to virt
(gluster volume set gv_openstack_1 group virt). The VM runs current (all
packages updated) Ubuntu Xenial.
I set up
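For reference, that command applies the packaged "virt" option group; the
exact options it sets vary by Gluster version, so the comments below are
indicative only:
    # apply the virt profile shipped in /var/lib/glusterd/groups/virt
    gluster volume set gv_openstack_1 group virt
    # typically enables options such as cluster.quorum-type=auto,
    # cluster.server-quorum-type=server and network.remote-dio=enable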
2017 Dec 07
0
Announcing GlusterFS release 3.13.0 (Short Term Maintenance)
This is a major release that includes a range of features enhancing
usability, enhancements to GFAPI for developers, and a set of bug fixes:
* Addition of a summary option to the heal info CLI (example below)
* Addition of checks for allowing lookups in AFR and removal of
'cluster.quorum-reads' volume option
* Support for max-port range in glusterd.vol
* Prevention of other processes accessing
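As an illustration of the first item above, the new summary option would be
invoked as follows (the volume name is a placeholder):
    gluster volume heal gv0 info summary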
2017 Sep 06
2
GlusterFS as virtual machine storage
you need to set
cluster.server-quorum-ratio 51%
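For reference, this is a cluster-wide setting, so it is applied across all
volumes at once - a sketch using the value quoted above:
    gluster volume set all cluster.server-quorum-ratio 51%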
On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi all,
>
> I have promised to do some testing and I finally find some time and
> infrastructure.
>
> So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created
> replicated volume with arbiter (2+1) and VM on KVM (via
2020 Sep 01
0
Unable to create subdirectories/files in samba mount when using vfs objects = glusterfs
Hi Team,
I am trying to set up a Samba CTDB cluster to export a gluster volume as a
Samba share. While the CTDB cluster works well, I ran into an issue with
creating subdirectories, and with creating files/directories within
subdirectories, when accessing the share from both Linux and Windows
servers.
Setup details:
We have a three-node cluster with nodes snode1, snode2 and snode3. I have a
gluster
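For context, a minimal share definition of the kind being described - a
sketch only, with placeholder volume and server names, based on the
vfs_glusterfs module parameters:
    [gvshare]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = gvol
        glusterfs:volfile_server = snode1
        kernel share modes = no
        read only = no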
2018 Feb 12
0
Failed to get quota limits
Hi,
Can you provide more information, such as the volume configuration, the
quota.conf file, and the log files?
On Sat, Feb 10, 2018 at 1:05 AM, mabi <mabi at protonmail.ch> wrote:
> Hello,
>
> I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume
quota <volname> list" that my quotas on that volume are broken. The command
returns no output and no errors
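For context, the quota CLI being discussed - a sketch with placeholder
volume and directory names:
    gluster volume quota gv0 enable
    gluster volume quota gv0 limit-usage /projects 10GB
    gluster volume quota gv0 list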
2017 Sep 07
3
GlusterFS as virtual machine storage
*shrug* I don't use arbiter for VM workloads, just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, then if
the up brick is dirty as far as the arbiter is concerned (i.e. the only
good copy is on the down brick), you will get ENOTCONN and your VMs will
halt on IO.
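A sketch of checking the setting in question (placeholder volume name;
gluster volume get requires a reasonably recent release):
    gluster volume get gv0 cluster.quorum-type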
On 6 September 2017 at 16:06,
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya,
thank you very much for your support, and sorry for the late reply.
Below you can find the output of the "gluster volume info tier2" command
and the Gluster software stack version:
gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks: