Displaying 20 results from an estimated 2000 matches similar to: "Removing two bricks"
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
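A minimal sketch of the free-space check suggested above, assuming a hypothetical volume name <vol> and brick path (neither is taken from this thread):
# Per-brick capacity and free space as gluster sees it
gluster volume status <vol> detail | grep -E 'Brick|Free'
# Free space on the hot-tier brick filesystem itself, on each node
df -h /bricks/hot/<vol>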
2017 Oct 27
0
gluster tiering errors
Herb,
I'm trying to weed out issues here.
So, I can see quota turned *on* and would like you to check the quota
settings and test how the system behaves *if quota is turned off*.
Although the file that failed to migrate was only 29K, I'm being a bit
paranoid while weeding out issues.
Are you still facing tiering errors?
I can see your response to Alex with the disk space consumption and
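A sketch of the quota check and the quota-off test suggested above; <vol> is a placeholder volume name, and quota limits usually have to be reconfigured after a disable/enable cycle:
# Show current quota limits and usage
gluster volume quota <vol> list
# Temporarily disable quota to test whether tier migration still fails
gluster volume quota <vol> disable
# Re-enable it after the test
gluster volume quota <vol> enable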
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
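For reference, a sketch of querying and adjusting the watermarks asked about above (the volume name and example values are placeholders, not settings from this thread):
# Query the current watermarks (percent of hot-tier capacity)
gluster volume get <vol> cluster.watermark-hi
gluster volume get <vol> cluster.watermark-low
# Example: lower them so demotion from the hot tier starts earlier
gluster volume set <vol> cluster.watermark-hi 75
gluster volume set <vol> cluster.watermark-low 50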
2018 Apr 10
0
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
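A rough sketch of such a dual-network setup; the volume name, mount point, and 192.168.67.x address below are illustrative only (the peer hostnames come from later in this thread), and which network gluster traffic uses generally follows how the probed names resolve:
# Peers probed via names that resolve on the 10.10.0.0/16 network
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
# A client on the 192.168.67.0/24 network would mount via an address
# reachable on that network
mount -t glusterfs 192.168.67.10:/urd-gds-volume /mnt/gluster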
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                  Value
------                  -----
cluster.watermark-hi    90
# gluster volume get <vol> cluster.watermark-low
Option
2017 Oct 19
3
gluster tiering errors
All,
I am new to gluster and have some questions/concerns about some tiering
errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
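A quick way to get an overview of how often these migrations fail (log path as in the excerpt above; <vol> stands for the volume name):
# Count promotion/demotion failures per day in the tier daemon log
grep -E 'Promotion failed|Demotion failed' \
  /var/log/glusterfs/tier/<vol>/tierd.log | cut -d' ' -f1 | sort | uniq -c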
2012 Jan 04
0
FUSE init failed
Hi,
I'm having an issue using the GlusterFS native client.
After doing a mount, the filesystem appears mounted but any operation
results in a
"Transport endpoint is not connected"
message.
gluster peer status and volume info don't complain.
I've copied the mount log below which mentions an error at fuse_init.
The kernel is based on 2.6.15 and the FUSE API version is 7.3.
I'm using
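Not from the thread, but a sketch of how the FUSE side could be inspected here; server, volume, and mount point names are placeholders:
# Kernel FUSE API version as reported at module init
dmesg | grep -i 'fuse init'
# Remount with a more verbose client log to capture the fuse_init error
mount -t glusterfs -o log-level=DEBUG server1:/testvol /mnt/gluster
tail -f /var/log/glusterfs/mnt-gluster.log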
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes,
On the first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID                                  Hostname     State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0  urd-gds-002  Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f  urd-gds-003  Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b  urd-gds-004
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problmes
When I do an NFS mount and do an ls I get:
[root at ovirt share]# ls
ls: reading directory .: Too many levels of symbolic links
[root at ovirt share]# ls -fl
ls: reading directory .: Too many levels of symbolic links
total 3636
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096 Jun 21 19:29 ..
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096
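One thing worth ruling out (not something stated in this thread): Gluster's built-in NFS server speaks NFSv3 only, so forcing the protocol version on the client is a common first test; server, export, and mount point below are placeholders:
# Force NFSv3 over TCP when mounting the gluster NFS export
mount -t nfs -o vers=3,tcp,nolock <server>:/<volume> /mnt/share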
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> recently we had two times a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list i found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
Hello Anoop,
thank you for your reply....
answers inside...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
> On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
>> Hello,
>>
>> recently we had two times a partial gluster outage followed by a total
>> outage of all four nodes. Looking into the gluster mailing list i found
>> a very similar case
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
Hi there,
I'm running glusterfs version 3.1.0.
The client crashed after some time with the stack below.
[2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1:
Subvolume 'distribute-1' came back up; going online.
[2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1:
data self-heal triggered. path:
/streaming/set3/work/reduce.12.1294902171.dplog.temp,
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
Hello,
Recently we had two partial gluster outages followed by a total
outage of all four nodes. Looking into the gluster mailing list I found
a very similar case in
http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
but I'm not sure if this issue is fixed...
Even though this outage happened on glusterfs 3.7.18, which gets no more
updates since ~.20, I would kindly ask
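The .trashcan directory in the subject is created by the trash translator; a sketch of checking its state and, if the feature is not needed, turning it off (volume name is a placeholder):
# Check whether the trash feature is enabled
gluster volume get <vol> features.trash
# Disable it if it is not needed (the existing .trashcan directory
# is typically left in place)
gluster volume set <vol> features.trash off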
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so as to still be able to use the volume when one of the
nodes goes down.
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
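A sketch of the quorum-related options such advice usually refers to (volume name is a placeholder; relaxing them trades split-brain protection for availability):
# Server-side quorum: do not stop bricks when the pool loses quorum
gluster volume set <vol> cluster.server-quorum-type none
# Client-side quorum: allow writes even when only one replica is reachable
gluster volume set <vol> cluster.quorum-type none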
2018 Apr 26
0
FreeBSD problem adding/removing replicated bricks
Hi Folks,
I'm trying to debug an issue that I've found while attempting to qualify
GlusterFS for potential distributed storage projects on the FreeBSD-11.1
server platform - using the existing package of GlusterFS v3.11.1_4.
The main issue I've encountered is that I cannot add new bricks while
setting/increasing the replica count.
If I create a replicated volume "poc" on
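For comparison, the usual command shape for adding a brick while raising the replica count looks like this on Linux builds (host and brick path are placeholders; whether the FreeBSD 3.11.1 package behaves the same is exactly what is in question here):
# Turn a single-brick volume "poc" into a 2-way replica
gluster volume add-brick poc replica 2 host2:/data/brick/poc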
2011 Aug 24
1
Adding/Removing bricks/changing replica value of a replicated volume (Gluster 3.2.1, OpenSuse 11.3/11.4)
Hi!
Until now, I have used Gluster in a 2-server setup (volumes created with
replica 2).
Upgrading the hardware, it would be helpful to extend the volume to
replica 3 to integrate the new machine, adding the respective brick,
and later reduce it back to 2, removing the respective brick, when
the old machine is decommissioned and no longer used.
But it seems that this requires to delete and
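A sketch of the replica changes described above, using the add-brick/remove-brick syntax of more recent 3.x releases (hosts and brick paths are hypothetical; Gluster 3.2.1 may not support this at all, which is presumably the point of the question):
# Grow from replica 2 to replica 3 by adding the new machine's brick
gluster volume add-brick myvol replica 3 server3:/export/brick
# Later, shrink back to replica 2 by removing that brick again
gluster volume remove-brick myvol replica 2 server3:/export/brick force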
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Is there any way to allow the secondary node to function and then replicate
what changed to the first (primary) when it's back online? Or should I just
go for a third node to allow for this?
Also, how safe is it to set the following to none?
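On the mounting part of the question: once quorum is satisfied or relaxed (see the reply above), a client only needs one reachable node to fetch the volume file, and a second node can be listed as a fallback; hostnames and mount point below are hypothetical:
# Mount via node1, falling back to node2 for the volume file if node1 is down
mount -t glusterfs -o backupvolfile-server=node2 node1:/gv01 /mnt/gv01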
2014 Apr 06
2
libgfapi failover problem on replica bricks
Hello,
I'm having an issue with rebooting bricks holding images for live KVM
machines (using libgfapi).
I have a replicated+distributed setup of 4 bricks (2x2). The cluster
contains images for a couple of KVM virtual machines.
My problem is that when I reboot a brick containing an image of a
VM, the VM will start throwing disk errors and eventually die.
The gluster volume is made like
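One setting commonly involved in this kind of failover stall is the client-side ping timeout, during which I/O to the lost brick simply hangs and a VM may treat the stall as a disk failure; a sketch, with the volume name as a placeholder (the 42-second default exists to avoid needless reconnects):
# Default network.ping-timeout is 42 seconds; lowering it makes clients
# give up on an unreachable brick sooner
gluster volume set <vol> network.ping-timeout 10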