Displaying 20 results from an estimated 1000 matches similar to: "Error IPV6_V6ONLY"
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still, I set the two params below to none, and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need at least 3 nodes to have quorum enabled. In a 2-node setup you
> need to
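For reference, those two options can be set (or later reverted) from the gluster CLI; a minimal sketch, assuming the volume name gv01 from the subject line:
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none
# revert to the defaults once a third node is available
gluster volume reset gv01 cluster.quorum-type
gluster volume reset gv01 cluster.server-quorum-type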
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still, I set the two params below to none, and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/11/2018 11:54 AM, Alex K wrote:
Hey Guys,
Returning to this topic after disabling the quorum:
cluster.quorum-type: none
cluster.server-quorum-type: none
I've run into a number of gluster errors (see below).
I'm using gluster as the backend for my NFS storage. I have gluster
running on two nodes, nfs01 and nfs02. It's mounted on /n on each host.
The path /n is
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup, with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Is there any way to allow the secondary node to keep functioning and then
replicate what changed back to the first (primary) node when it's back
online? Or should I just go for a third node to allow for this?
Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so that you can still use the volume when one of the
nodes goes down.
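If a third machine ever becomes available, it does not need to hold a full copy of the data; an arbiter brick is enough to restore quorum. A sketch, assuming a reasonably recent gluster, the existing replica-2 volume gv01, and a hypothetical third host nfs03 with a brick path of my choosing:
gluster volume add-brick gv01 replica 3 arbiter 1 nfs03:/bricks/gv01/brick
# the arbiter stores only metadata, so it needs far less disk than the data bricks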
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two-node glusterfs setup, with one node down, I can't use the second
> node to mount the volume. I understand this is
2018 Feb 24
0
Failed heal volume
When I try to run a heal on the volume I get these log errors, and 3221 files
are not healing:
[2018-02-24 15:32:00.915219] W [socket.c:3216:socket_connect] 0-glusterfs:
Error disabling sockopt IPV6_V6ONLY: "Protocollo non disponibile" (Protocol not available)
[2018-02-24 15:32:00.915854] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-02-24 15:32:01.925714] E
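As an aside, the IPV6_V6ONLY warning is usually harmless; it typically just means the host has no usable IPv6. One way to silence it is to pin gluster to IPv4; a sketch, assuming the stock /etc/glusterfs/glusterd.vol location:
# add inside the "volume management" block of /etc/glusterfs/glusterd.vol
option transport.address-family inet
# then restart glusterd on each node
systemctl restart glusterd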
2018 Feb 23
1
Problem migration 3.7.6 to 3.13.2
Thanks for the reply. I found these values:
datastore_temp.serda1.glusterfs-p1-b1.vol: option shared-brick-count 2
datastore_temp.serda1.glusterfs-p2-b2.vol: option shared-brick-count 2
datastore_temp.serda2.glusterfs-p1-b1.vol: option shared-brick-count 0
datastore_temp.serda2.glusterfs-p2-b2.vol: option shared-brick-count 0
Do I need to change the values on the serda2 node?
2018-02-23
2018 Feb 23
2
Problem migration 3.7.6 to 3.13.2
I have done a migration to a new version of gluster, but
when I run the command
df -h
the reported space is less than the total.
Configuration:
2 peers
Gluster process                TCP Port  RDMA Port  Online  Pid
Brick serda2:/glusterfs/p2/b2  49152     0          Y       1560
Brick serda1:/glusterfs/p2/b2  49152     0          Y       1462
Brick serda1:/glusterfs/p1/b1  49153     0          Y       1476
Brick serda2:/glusterfs/p1/b1
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote:
> After seeing the command history, I could see that you have 3 nodes, and
> first you are peer probing 51.15.90.60 and 163.172.151.120 from
> 51.15.77.14.
> So at this point you already have a 3-node cluster; after all this you are
> going to node 2 and peer probing 51.15.77.14 again.
> Ideally it should work with the above steps, but due to some
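For completeness, the probe sequence described above would look roughly like this when run from 51.15.77.14 (a sketch reconstructed from the thread, not the poster's actual history):
gluster peer probe 51.15.90.60
gluster peer probe 163.172.151.120
gluster peer status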
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks:
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and are connected to
2018 Feb 23
0
Problem migration 3.7.6 to 3.13.2
Hi Daniele,
Do you mean that the df -h output is incorrect for the volume post the
upgrade?
If yes and the bricks are on separate partitions, you might be running into
[1]. Can you search for the string "option shared-brick-count" in the files
in /var/lib/glusterd/vols/<volumename> and let us know the value? The
workaround to get this working on the cluster is available in [2].
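The check being asked for is a one-liner; a sketch, with <volumename> left as a placeholder:
grep "option shared-brick-count" /var/lib/glusterd/vols/<volumename>/*.vol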
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID                                  Hostname     State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0  urd-gds-002  Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f  urd-gds-003  Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b  urd-gds-004
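For the client side that was asked about earlier, a FUSE mount over the first network would typically look like the line below; a sketch with a hypothetical volume name vol0 and mount point /mnt/gluster:
mount -t glusterfs urd-gds-001:/vol0 /mnt/gluster -o backup-volfile-servers=urd-gds-002:urd-gds-003
# backup-volfile-servers lets the client fetch the volfile from another peer if urd-gds-001 is down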
2010 Oct 08
1
IPV6_V6ONLY
Is there a particular reason that sshd sets IPV6_V6ONLY on listen
sockets?
----
Scott Neugroschl
XYPRO Technology Corporation
scott_n at xypro.com
805-583-2874
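For context, sshd sets IPV6_V6ONLY so that its IPv6 listen socket only accepts IPv6 connections; that way separate v4 and v6 sockets can be bound side by side and each ListenAddress gets its own socket. An illustrative dual-stack sshd_config excerpt (not taken from the thread):
AddressFamily any
ListenAddress 0.0.0.0
ListenAddress ::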
2009 Sep 13
6
[Bug 1648] New: Fix IPV6_V6ONLY for -L with -g
https://bugzilla.mindrot.org/show_bug.cgi?id=1648
Summary: Fix IPV6_V6ONLY for -L with -g
Product: Portable OpenSSH
Version: -current
Platform: All
OS/Version: Linux
Status: NEW
Severity: minor
Priority: P2
Component: ssh
AssignedTo: unassigned-bugs at mindrot.org
ReportedBy: jan.kratochvil
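For reference, the combination in the bug title is a local forward (-L) together with gateway ports (-g); a typical invocation (hosts and ports here are hypothetical) looks like:
ssh -g -L 8080:localhost:80 user@remote.example.com
# -g lets hosts other than the loopback connect to the locally forwarded port 8080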
2017 Sep 09
2
GlusterFS as virtual machine storage
Yes, this is my observation so far.
On Sep 9, 2017 13:32, "Gionatan Danti" <g.danti at assyoma.it> wrote:
> On 09-09-2017 09:09 Pavel Szalbot wrote:
>
>> Sorry, I did not start the glusterfsd on the node I was shutting
>> yesterday and now killed another one during FUSE test, so it had to
>> crash immediately (only one of three nodes were actually
2017 Sep 09
0
GlusterFS as virtual machine storage
On 09-09-2017 09:09 Pavel Szalbot wrote:
> Sorry, I did not start the glusterfsd on the node I was shutting
> yesterday and now killed another one during FUSE test, so it had to
> crash immediately (only one of three nodes were actually up). This
> definitely happened for the first time (only one node had been killed
> yesterday).
>
> Using FUSE seems to be OK with
2024 Oct 14
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
First a heartfelt thanks for writing back.
In another solution (one not having this issue) we use nfs-ganesha to serve squashfs root filesystem objects to compute nodes. It is working great. We also have fuse-through-LIO.
The setup here is 3 servers making up the cluster, with an admin node.
The XFS issue is only observed when we try to replace an existing one with another XFS on top, and only with RAW,
2017 Sep 09
2
GlusterFS as virtual machine storage
Sorry, I did not start glusterfsd on the node I was shutting down
yesterday, and now I killed another one during the FUSE test, so it had to
crash immediately (only one of three nodes was actually up). This
definitely happened for the first time (only one node had been killed
yesterday).
Using FUSE seems to be OK with replica 3. So this can be gfapi related
or maybe rather libvirt related.
I tried
2008 Sep 03
0
IPV6_V6ONLY and UDP
I see in the release notes for 7.0, and separately for CURRENT on alpha,
that IPV6_V6ONLY is now fully supported for UDP.
Can someone point me to code/text that explains what exactly was broken
about it? In what case(s) did UDP sockets with IPV6_V6ONLY set accept V4
packets?
Thanks in advance..
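For anyone checking the current behaviour: on FreeBSD the system-wide default for IPV6_V6ONLY is exposed as a sysctl, and individual sockets can still override it with setsockopt; a sketch:
# show the current default for new AF_INET6 sockets
sysctl net.inet6.ip6.v6only
# make new IPv6 sockets v6-only by default
sysctl net.inet6.ip6.v6only=1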
--
Jason Fesler, email/jabber <jfesler@gigo.com> resume: http://jfesler.com
"Give a man