Displaying 20 results from an estimated 1100 matches similar to: "Replace faulty host"
2023 Oct 27
1
Replace faulty host
Hi Markus,
It looks quite well documented, but please use https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-replacing_hosts as 3.5 is the latest version for RHGS.
If the OS disks are failing, I would have tried moving the data disks to the new machine and transferring the gluster files in /etc and /var/lib to the new node.
Any reason to reuse
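A rough sketch of that disk-swap approach, assuming the standard gluster state directories (/etc/glusterfs and /var/lib/glusterd) and a hypothetical replacement host called newnode:

# stop glusterd before copying state off the failing node
systemctl stop glusterd
# copy the gluster configuration and cluster state to the replacement
rsync -a /etc/glusterfs/ newnode:/etc/glusterfs/
rsync -a /var/lib/glusterd/ newnode:/var/lib/glusterd/
# move the data disks over, mount the bricks at the same paths,
# give the replacement the old hostname/IP, then bring glusterd up
systemctl start glusterd
gluster peer status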
2023 Feb 20
1
Gluster 11.0 upgrade
I made a recursive diff on the upgraded arbiter.
/var/lib/glusterd/vols/gds-common is the upgraded arbiter
/home/marcus/gds-common is one of the other nodes still on gluster 10
diff -r /var/lib/glusterd/vols/gds-common/bricks/urd-gds-030:-urd-gds-gds-common /home/marcus/gds-common/bricks/urd-gds-030:-urd-gds-gds-common
5c5
< listen-port=60419
---
> listen-port=0
11c11
<
2023 Feb 20
2
Gluster 11.0 upgrade
Hi again Xavi,
I did some more testing on my virt machines
with the same setup:
Number of Bricks: 1 x (2 + 1) = 3
If I do it the same way and upgrade the arbiter first,
I get the same behavior: the bricks do not start
and the other nodes do not "see" the upgraded node.
If I upgrade one of the other nodes (non arbiter) and restart
glusterd on both the arbiter and the other the arbiter
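For reference, the per-node upgrade step being tested here would look roughly like this (dnf and the glusterfs-server package name are assumptions based on the CentOS setup mentioned elsewhere in these threads):

systemctl stop glusterd
dnf upgrade -y glusterfs-server
systemctl start glusterd
# check whether the peers see the upgraded node again
gluster peer status
gluster volume status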
2023 Feb 21
2
Gluster 11.0 upgrade
Hi Xavi,
Copying the same info file worked well and the gluster 11 arbiter
is now up and running and all the nodes are communicating
the way they should.
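A minimal sketch of that info-file fix, run on the upgraded arbiter with gds-common as the volume (the source hostname is hypothetical; the path matches the diff in the earlier message):

systemctl stop glusterd
# overwrite the arbiter's copy with the info file from a healthy node
scp urd-gds-031:/var/lib/glusterd/vols/gds-common/info \
    /var/lib/glusterd/vols/gds-common/info
systemctl start glusterd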
Just another note on something I discovered on my virt machines.
All three nodes have been upgraded to 11.0 and are working.
If I run:
gluster volume get all cluster.op-version
I get:
Option Value
------
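For anyone following along, checking and then raising the cluster op-version once every node runs gluster 11 looks like this (110000 is the op-version that corresponds to 11.0):

gluster volume get all cluster.op-version
gluster volume set all cluster.op-version 110000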
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh,
Yes, all nodes have the same version 4.1.1, both master and slave.
All glusterd daemons are crashing on the master side.
Will send logs tonight.
Thanks,
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:
Hi Marcus,
Is the
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command-lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and are connected to
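For context, a typical client-side mount command-line looks like this (volume name and mount point are hypothetical; backup-volfile-servers is the standard option for naming fallback volfile servers):

mount -t glusterfs -o backup-volfile-servers=urd-gds-002:urd-gds-003 \
    urd-gds-001:/myvol /mnt/myvol
# or the equivalent /etc/fstab entry:
# urd-gds-001:/myvol /mnt/myvol glusterfs defaults,_netdev,backup-volfile-servers=urd-gds-002:urd-gds-003 0 0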
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
2024 Jan 19
1
Heal failure
Hi all,
I have a really strange problem with my cluster.
Running gluster 10.4, replicated with an arbiter:
Number of Bricks: 1 x (2 + 1) = 3
All my files in the system seem fine and I have not
found any broken files.
Even so, I have 40000 files that need healing
according to heal-count.
Heal fails for all the files over and over again.
If I use heal info I just get a long list of gfids
and trying
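The commands behind those numbers, for reference (the volume name gds-common is an assumption carried over from the poster's other threads):

gluster volume heal gds-common statistics heal-count
gluster volume heal gds-common info     # the long list of gfids
gluster volume heal gds-common full     # retrigger a full heal
# a gfid can be mapped back to a path through the brick's .glusterfs tree:
# <brick>/.glusterfs/<first 2 hex chars>/<next 2>/<full gfid>
# is a hard link to the file (a symlink for directories)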
2018 Apr 10
0
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
2012 Sep 12
1
SNPRelate package error
Dear all,
I am using the R package SNPRelate but I get an error when I run the following commands. Do you know what might be the problem? Thanks in advance.
> vcf.fn <- system.file("extdata", "sequence.vcf", package="SNPRelate")
> snpgdsVCF2GDS(vcf.fn, "test.gds")
Start snpgdsVCF2GDS ...
Open
2006 Dec 27
2
Google Desktop Search and R script files
I want to be able to search my saved R script files on my hard drive.
Thankfully the files are all saved with the .R filename extension, which
means that "filetype:R" in the Google Desktop Search (GDS) box limits the
search to those files. Unfortunately, if I put any other term in the search
box (for example, "hist" to find scripts where I have created a histogram)
then GDS does
2023 Oct 27
1
State of the gluster project
Hi Diego,
I have had a look at BeeGFS and it seems more similar
to ceph than to gluster. It requires extra management
nodes similar to ceph, right?
Secondly, there are no snapshots in BeeGFS, as
I understand it.
I know ceph has snapshots so for us this seems a
better alternative. What is your experience of ceph?
I am sorry to hear about your problems with gluster,
from my experience we had
2023 Oct 27
2
State of the gluster project
Hi all,
I just have a general thought about the gluster
project.
I have got the feeling that things have slowed down
in the gluster project.
I have had a look at github and to me the project
seems to be slowing down: for gluster version 11 there have
been no minor releases, we are still on 11.0 and I have
not found any references to 11.1.
There is a milestone called 12 but it seems to be
stale.
I have hit
2023 Oct 27
1
State of the gluster project
Hi.
I'm also migrating to BeeGFS and CephFS (depending on usage).
What I liked most about Gluster was that files were easily recoverable
from bricks even in case of disaster and that it said it supported RDMA.
But I soon found that RDMA was being phased out, and I always find
entries that are not healing after a couple months of (not really heavy)
use, directories that can't be
2023 Oct 27
1
State of the gluster project
Maybe a bit OT...
I'm no expert on either, but the concepts are quite similar.
Both require "extra" nodes (metadata and monitor), but those can be
virtual machines or you can host the services on OSD machines.
We don't use snapshots, so I can't comment on that.
My experience with Ceph is limited to having it working on Proxmox. No
experience yet with CephFS.
BeeGFS is
2021 Nov 29
1
Gluster 10 used ports
Hi all,
Over the years I have been using the same ports in my firewall
for gluster, 49152-49251 (I know, a bit too many ports, but it is a
local network with limited access).
Today I upgraded from version 9 to version 10 and it went
well until I ran:
gluster volume heal my-vol info summary
I got the answer:
Status: Transport endpoint is not connected
I realized that glusterfsd was using 50000+
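A sketch of widening the firewall accordingly (firewalld syntax; the upper bound of the brick port range can also be capped with the max-port option in /etc/glusterfs/glusterd.vol):

firewall-cmd --permanent --add-port=49152-60999/tcp
firewall-cmd --reload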
2023 Dec 19
2
Gluster 11 OP version
Hi all,
We upgraded to gluster 11.1 and the OP version
was fixed in this version, so I changed the OP version
to 110000.
Now we have an obscure, vague problem.
Our users usually run 100+ processes with
GNU parallel and now the execution time has
increased to nearly double.
I can see that there are a couple of heals happening every
now and then but this does not seem strange to me.
Just to
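One way to narrow down a slowdown like this is gluster's built-in profiler (volume name hypothetical):

gluster volume profile gds-common start
# run the GNU parallel workload, then inspect per-brick latencies
gluster volume profile gds-common info
gluster volume profile gds-common stop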
2023 Oct 27
1
State of the gluster project
It is very unfortunate that Gluster is not maintained. From Kadalu Technologies, we are trying to set up a small team dedicated to maintaining GlusterFS for the next three years. This will only be possible if we get funding from the community and companies. The details about the proposal are here: https://kadalu.tech/gluster/
About Kadalu Technologies: Kadalu Technologies was started in 2019 by a few
2023 Oct 27
1
State of the gluster project
Hi,
Red Hat Gluster Storage is EOL, Red Hat moved Gluster devs to other
projects, so Gluster doesn't get much attention. From my experience, it has
deteriorated since about version 9.0, and we're migrating to alternatives.
/Z
On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all,
> I just have a general thought about the gluster
>