Displaying 20 results from an estimated 1000 matches similar to: "Gluster cluster on two networks"
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share server-side gluster peer probe and client-side mount
command-lines.
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
All machines have two network interfaces and are connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
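For reference: gluster can associate a second address with an already-probed peer simply by probing it again under the other network's hostname. A sketch, assuming hypothetical second-network names ending in -ldap:

```shell
# From urd-gds-001: register each peer's second-network hostname
# (the -ldap names are placeholders; use whatever resolves on 192.168.67.0/24)
gluster peer probe urd-gds-002-ldap
gluster peer probe urd-gds-003-ldap
gluster peer probe urd-gds-004-ldap
gluster peer status   # each peer should now list both names under "Other names"
```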
2018 Apr 10
0
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
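As an aside, a 2 x (2 + 1) volume like the one described would typically be created along these lines; the hostnames and brick paths below are placeholders, not the poster's actual layout:

```shell
# Two replica sets, each with two data bricks and one arbiter brick
gluster volume create gvol replica 3 arbiter 1 \
  hostA:/data/brick hostB:/data/brick hostC:/data/arbiter \
  hostD:/data/brick hostE:/data/brick hostF:/data/arbiter
gluster volume start gvol
```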
2023 Feb 20
1
Gluster 11.0 upgrade
I made a recursive diff on the upgraded arbiter.
/var/lib/glusterd/vols/gds-common is the upgraded arbiter
/home/marcus/gds-common is one of the other nodes still on gluster 10
diff -r /var/lib/glusterd/vols/gds-common/bricks/urd-gds-030:-urd-gds-gds-common /home/marcus/gds-common/bricks/urd-gds-030:-urd-gds-gds-common
5c5
< listen-port=60419
---
> listen-port=0
11c11
<
2023 Feb 20
2
Gluster 11.0 upgrade
Hi again Xavi,
I did some more testing on my virt machines
with the same setup:
Number of Bricks: 1 x (2 + 1) = 3
If I do it the same way, I upgrade the arbiter first,
I get the same behavior: the bricks do not start
and the other nodes do not "see" the upgraded node.
If I upgrade one of the other nodes (non arbiter) and restart
glusterd on both the arbiter and the other the arbiter
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh,
Yes, all nodes have the same version 4.1.1 both master and slave.
All glusterd are crashing on the master side.
Will send logs tonight.
Thanks,
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
Den 13 juli 2018 11:28 skrev Kotresh Hiremath Ravishankar <khiremat at redhat.com>:
Hi Marcus,
Is the
2023 Feb 21
2
Gluster 11.0 upgrade
Hi Xavi,
Copying the same info file worked well; the gluster 11 arbiter
is now up and running and all the nodes are communicating
the way they should.
Just another note on something I discovered on my virt machines.
All three nodes have been upgraded to 11.0 and are working.
If I run:
gluster volume get all cluster.op-version
I get:
Option Value
------
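For anyone following along: once every node runs 11.0, the cluster op-version can be raised. A hedged sketch; take the exact number from cluster.max-op-version on your own install rather than from here:

```shell
gluster volume get all cluster.max-op-version   # highest op-version this cluster supports
gluster volume set all cluster.op-version 110000  # use the value max-op-version reported
```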
2023 Oct 25
1
Replace faulty host
Hi all,
I have a problem with one of our gluster clusters.
This is the setup:
Volume Name: gds-common
Type: Distributed-Replicate
Volume ID: 42c9fa00-2d57-4a58-b5ae-c98c349cfcb6
Status: Started
Snapshot Count: 26
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: urd-gds-031:/urd-gds/gds-common
Brick2: urd-gds-032:/urd-gds/gds-common
Brick3: urd-gds-030:/urd-gds/gds-common
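If the faulty host is replaced by a new machine under a new hostname, the usual sequence is a replace-brick followed by a full heal. A sketch; urd-gds-033 is a hypothetical replacement host:

```shell
gluster peer probe urd-gds-033
gluster volume replace-brick gds-common \
  urd-gds-030:/urd-gds/gds-common \
  urd-gds-033:/urd-gds/gds-common \
  commit force
gluster volume heal gds-common full
```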
2023 Oct 27
1
Replace faulty host
Hi Marcus,
It looks quite well documented, but please use https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-replacing_hosts as 3.5 is the latest version for RHGS.
If the OS disks are failing, I would have tried moving the data disks to the new machine and transferring the gluster files in /etc and /var/lib to the new node.
Any reason to reuse
2017 Oct 17
2
Distribute rebalance issues
Hi,
I have a rebalance that has failed on one peer twice now. Rebalance
logs below (directories anonymised and some irrelevant log lines cut).
It looks like it loses connection to the brick, but immediately stops
the rebalance on that peer instead of waiting for reconnection - which
happens a second or so later.
Is this normal behaviour? So far it has been the same server and the
same (remote)
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk>
wrote:
> Hi,
>
>
> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2017 Oct 17
1
Distribute rebalance issues
Nithya,
Is there any way to increase the logging level of the brick? There is
nothing obvious (to me) in the log (see below for the same time period as
the latest rebalance failure). This is the only brick on that server that
has disconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029]
[server-handshake.c:692:server_setvolume] 0-video-server: accepted
client from
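Brick log verbosity can be raised per volume; a sketch, assuming the volume is named video as the 0-video-server log prefix suggests:

```shell
gluster volume set video diagnostics.brick-log-level DEBUG
# ... reproduce the disconnect, then turn it back down (DEBUG is very noisy):
gluster volume set video diagnostics.brick-log-level INFO
```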
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following informations please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Dec 05
4
SAMBA VFS module for GlusterFS crashes
Hello,
I'm trying to set up a SAMBA server serving a GlusterFS volume.
Everything works fine if I locally mount the GlusterFS volume (`mount
-t glusterfs ...`) and then serve the mounted FS through SAMBA, but
the performance is slower by a 2x/3x compared to a SAMBA server with a
local ext4 filesystem.
I gather that SAMBA vfs_glusterfs module can give better
performance. However, as soon as I
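For context, a vfs_glusterfs share talks to the volume over libgfapi and bypasses the FUSE mount entirely. A minimal smb.conf sketch, with the share and volume names as placeholders:

```ini
[gluster-share]
path = /
vfs objects = glusterfs
glusterfs:volume = gv0
glusterfs:logfile = /var/log/samba/glusterfs-gv0.%M.log
kernel share modes = no
```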
2018 Jan 23
2
Understanding client logs
Hi all,
I have a problem pinpointing an error: users of
my system experience processes that crash.
The thing that has changed since the crashes started
is that I added a gluster cluster.
Of course the users started to attack my gluster cluster.
I started looking at logs, starting from the client side.
I just need help to understand how to read it in the right way.
I can see that every ten
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access
Hi,
After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the
KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using
libgfapi are no longer able to start. The libvirt log file shows:
[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify]
0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
[2016-11-02 14:26:41.864075] I [MSGID:
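Not a confirmed diagnosis, but a classic cause of libgfapi clients being rejected by a newer server is the insecure-port restriction; the checks below are a sketch (the volume name is a placeholder):

```shell
gluster volume set vmstore server.allow-insecure on
# and on each server, add to /etc/glusterfs/glusterd.vol, then restart glusterd:
#   option rpc-auth-allow-insecure on
```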
2018 Jan 23
0
Understanding client logs
Marcus,
Please paste the name-version-release of the primary glusterfs package on
your system.
If possible, also describe the typical workload that happens at the mount
via the user application.
On Tue, Jan 23, 2018 at 7:43 PM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all,
> I have a problem pinpointing an error: users of
> my system experience processes that
2024 Jan 19
1
Heal failure
Hi all,
I have a really strange problem with my cluster.
Running gluster 10.4, replicated with an arbiter:
Number of Bricks: 1 x (2 + 1) = 3
All my files in the system seem fine and I have not
found any broken files.
Even so, heal-count shows 40000 files that need healing.
Heal fails for all the files over and over again.
If I use heal info I just get a long list of gfids
and trying
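For reference, the heal state can be inspected and a full heal retried like this; the volume name is a placeholder:

```shell
gluster volume heal gvol statistics heal-count   # per-brick count of entries needing heal
gluster volume heal gvol info summary            # totals, including split-brain entries
gluster volume heal gvol full
```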
2012 Sep 12
1
SNPRelate package error
Dear all,
I am using the R package SNPRelate, but I get an error when I run the following commands. Do you know what the problem might be? Thanks in advance.
> vcf.fn <- system.file("extdata", "sequence.vcf", package="SNPRelate")
> snpgdsVCF2GDS(vcf.fn, "test.gds")
Start snpgdsVCF2GDS ...
Open