similar to: Replace broken host, keeping the existing bricks

Displaying 20 results from an estimated 10000 matches similar to: "Replace broken host, keeping the existing bricks"

2024 Jun 11
1
[EXT] Replace broken host, keeping the existing bricks
Hi, The method depends a bit on whether you use a distributed-only system (like me) or a replicated setting. I'm using a distributed-only setting (many bricks on different servers, but no replication). All my servers boot via network, i.e., on a start, it's like a new host. To rescue the old bricks, just set up a new server with the same OS, the same IP and the same hostname (!very
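A rough sketch of that rebuild, assuming a systemd host and the usual glusterd state directory; the UUID step reflects my understanding of how an existing peer identity is reused, so verify it against your own setup before relying on it:

  # On the rebuilt server (same IP and hostname as the failed one)
  cat /var/lib/glusterd/glusterd.info    # should still contain UUID=<old-uuid>
  # If the file was lost, recreate it with the UUID the other peers report
  # for this host in `gluster peer status`, then restart glusterd.
  systemctl restart glusterd
  gluster peer status                    # pool should be complete again
  gluster volume status                  # bricks on the old paths should come back online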
2018 Mar 09
1
wrong size displayed with df after upgrade to 3.12.6
Hi Stefan, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. Please take a look at [1] and try out the workaround provided in the comments. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260 On 9 March 2018 at 13:37, Stefan Solbrig <stefan.solbrig at ur.de> wrote: > Dear all, > > I have a problem with df after
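For reference, a quick way to check whether a volume is affected by the shared-brick-count issue tracked in [1] (volume name is a placeholder; the actual workaround is described in the bug comments):

  # On each server, inspect the generated brick volfiles:
  grep -r shared-brick-count /var/lib/glusterd/vols/<volname>/
  # A value larger than 1 for bricks that do not actually share a
  # filesystem matches the df miscalculation described in [1].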
2018 Mar 09
2
wrong size displayed with df after upgrade to 3.12.6
Dear all, I have a problem with df after I upgraded from 3.12.4 to 3.12.6. All four bricks are shown as online, and all bricks are being used. gluster v status shows the correct sizes for all devices. However, df does not show the correct glusterfs volume size. It seems to me that it "forgets" one brick, although all bricks are used when I'm writing files. best wishes, Stefan
2018 Jan 21
1
mkdir -p, cp -R fails
Dear all, I have a problem with glusterfs 3.12.4: mkdir -p fails with "no data available" when umask is 0022, but works when umask is 0002. Also recursive copy (cp -R or cp -r) fails with "no data available", independently of the umask. See below for an example to reproduce the error. I already tried to change transport from rdma to tcp. (Changing the transport works, but
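The reproduction example referenced above is cut off in this excerpt; a minimal sketch of the kind of test described, with the mount point as an assumption:

  umask 0022
  mkdir -p /mnt/glusterfs/a/b/c     # reportedly fails with "no data available"
  umask 0002
  mkdir -p /mnt/glusterfs/d/e/f     # reportedly succeeds
  cp -R /mnt/glusterfs/d /mnt/glusterfs/copy   # reportedly fails regardless of umask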
2017 Dec 11
2
active/active failover
Dear all, I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8) So my question is: can I really use glusterfs to do failover in the way described
2017 Dec 12
1
active/active failover
Hi Alex, Thank you for the quick reply! Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me, it more or less evens out. Moreover, I have SAN storage that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"
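For context, a replace-brick of the kind being tested is typically issued like this (volume and brick names are placeholders):

  gluster volume replace-brick myvol \
      oldserver:/bricks/brick1 newserver:/bricks/brick1 commit force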
2017 Dec 11
0
active/active failover
Hi Stefan, I think what you propose will work, though you should test it thoroughly. I think more generally, "the GlusterFS way" would be to use 2-way replication instead of a distributed volume; then you can lose one of your servers without an outage, and re-synchronize when it comes back up. Chances are that if you weren't using the SAN volumes, you could have purchased two servers
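A sketch of that suggestion, i.e. a 2-way replicated volume across two servers (names and paths are placeholders):

  gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
  gluster volume start myvol
  # If one server goes down, clients keep working against the other;
  # once it is back, check self-heal progress with:
  gluster volume heal myvol info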
2017 Dec 18
2
interval or event to evaluate free disk space?
Hi all, with the option "cluster.min-free-disk" set, glusterfs avoids placing files on bricks that are "too full". I'd like to understand when the free space on the bricks is calculated. It seems to me that this does not happen for every write call (naturally) but at some interval, or that some other event triggers this. I.e., if I write two files quickly (that together
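For reference, the option in question is set per volume, e.g. (volume name and threshold are placeholders):

  gluster volume set myvol cluster.min-free-disk 10%
  gluster volume get myvol cluster.min-free-disk   # confirm the current value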
2018 May 30
2
RDMA inline threshold?
Forgot to mention, sometimes I have to force-start other volumes as well; it's hard to determine which brick process is locked up from the logs.
Status of volume: rhev_vms_primary
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary
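The force start mentioned here is, as I understand it, done per volume like this:

  gluster volume status rhev_vms_primary        # look for bricks with Online = N
  gluster volume start rhev_vms_primary force   # restarts brick processes that are down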
2018 Apr 25
3
Problem adding replicated bricks on FreeBSD
Hi Folks, I'm trying to debug an issue that I've found while attempting to qualify GlusterFS for potential distributed storage projects on the FreeBSD-11.1 server platform, using the existing package of GlusterFS v3.11.1_4. The main issue I've encountered is that I cannot add new bricks while setting/increasing the replica count. If I create a replicated volume "poc"
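The failing operation is adding a brick while raising the replica count; one common shape of that, for illustration only (hostnames and paths are placeholders, and the poster's exact sequence is cut off here):

  # e.g. grow an existing replica 2 volume "poc" to replica 3
  gluster volume add-brick poc replica 3 host3:/gluster/brick1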
2018 May 29
2
RDMA inline threshold?
Dear all, I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed) over RDMA transport. One user had a directory with a large number of files (50,000 files) and just doing an "ls" in this directory yields a "Transport endpoint not connected" error. The effect is that "ls" only shows some files, but not all. The respective log file shows this
2018 May 30
0
RDMA inline threshold?
Dear Dan, thanks for the quick reply! I actually tried restarting all processes (and even rebooting all servers), but the error persists. I can also confirm that all brick processes are running. My volume is a distribute-only volume (not dispersed, no sharding). I also tried mounting with use_readdirp=no, because the error seems to be connected to readdirp, but this option does not change
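For reference, that mount option is passed on the client side like this (server, volume and mount point are placeholders):

  mount -t glusterfs -o use_readdirp=no server1:/myvol /mnt/myvol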
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi Jose, Gluster is working as expected. The Distributed-Replicate type just means that there are now 2 replica sets and files will be distributed across them. A volume of type Replicate (1 x n, where n is the number of bricks in the replica set) indicates there is no distribution (all files on the volume will be present on all the bricks in the volume). A volume of type Distributed-Replicate
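A concrete illustration of the difference (hostnames and paths are placeholders):

  # Replicate: 1 x 2 = 2, every file lives on both bricks
  gluster volume create repvol replica 2 node1:/bricks/b1 node2:/bricks/b1

  # Distributed-Replicate: 2 x 2 = 4, files are distributed over two replica pairs
  gluster volume create distrep replica 2 \
      node1:/bricks/b1 node2:/bricks/b1 node1:/bricks/b2 node2:/bricks/b2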
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it: https://bugzilla.redhat.com/show_bug.cgi?id=1491059 https://bugzilla.redhat.com/show_bug.cgi?id=1491060 Please see the above bugs for full detail. In summary, my issue was related to glusterd's handling of pid files when it starts self-heal and brick processes. The issues are: a. brick pid file leaves stale pid and brick fails
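One way to check for the stale-pid symptom described here; the pid file location varies between releases, so the path below is an assumption:

  gluster volume status <volname>    # note the Pid column glusterd reports
  cat /var/lib/glusterd/vols/<volname>/run/*.pid
  ps -p <pid-from-file>              # no such process means the pid file is stale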
2018 Apr 26
0
Problem adding replicated bricks on FreeBSD
On Thu, Apr 26, 2018 at 9:06 PM Mark Staudinger <mark.staudinger at nyi.net> wrote: > Hi Folks, > I'm trying to debug an issue that I've found while attempting to qualify > GlusterFS for potential distributed storage projects on the FreeBSD-11.1 > server platform - using the existing package of GlusterFS v3.11.1_4 > The main issue I've encountered is that I
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya, This is what I have so far: I have peered both cluster nodes together as a replica, from node 1A and 1B. Now when I tried to add it, I get the error that it is already part of a volume. When I run gluster volume info, I see that it has switched to distributed-replica. Thanks, Jose
[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
On Fri, 12 Jan 2018 at 21:16, Nithya Balachandran <nbalacha at redhat.com> wrote: > ---------- Forwarded message ---------- > From: Jose Sanchez <josesanc at carc.unm.edu> > Date: 11 January 2018 at 22:05 > Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks > each. > To: Nithya Balachandran <nbalacha at redhat.com> > Cc: gluster-users
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
---------- Forwarded message ---------- From: Jose Sanchez <josesanc at carc.unm.edu> Date: 11 January 2018 at 22:05 Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each. To: Nithya Balachandran <nbalacha at redhat.com> Cc: gluster-users <gluster-users at gluster.org> Hi Nithya, Thanks for helping me with this, I understand now, but I have a few
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya, Thanks for helping me with this, I understand now, but I have a few questions. When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add bricks, it failed. > [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch > volume add-brick: failed: /gdata/brick2/scratch is already part of a
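For what it's worth, the "already part of a volume" error usually means the brick directory still carries Gluster's extended attributes from an earlier attempt. A commonly cited cleanup, only for a brick whose contents you no longer need (paths taken from the command above):

  # DANGER: wipes Gluster metadata on the brick; use only if the data is expendable
  setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
  setfattr -x trusted.gfid /gdata/brick2/scratch
  rm -rf /gdata/brick2/scratch/.glusterfs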
2018 May 30
0
RDMA inline threshold?
Stefan, Sounds like a brick process is not running. I have noticed some strangeness in my lab when using RDMA; I often have to forcibly restart the brick process, "often" as in every single time I do a major operation: add a new volume, remove a volume, stop a volume, etc. gluster volume status <vol> Does any of the self-heal daemons show N/A? If that's the case, try forcing a restart on
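Spelled out, the check and restart being suggested (volume name is a placeholder):

  gluster volume status myvol        # rows showing N/A under Online/Port are suspect
  gluster volume start myvol force   # force-starts brick processes that are down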