Displaying 20 results from an estimated 7000 matches similar to: "High CPU Usage After Glusterfs install"
2018 Apr 11 - 0 replies - volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
2018 Apr 11 - 3 replies - volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still I set the below two params to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need 3 nodes at least to have quorum enabled. In 2 node setup you
> need to
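For reference, a minimal sketch of how the two quorum options mentioned above would be applied with the gluster CLI, assuming the volume from this thread (gv01):
# gluster volume set gv01 cluster.quorum-type none
# gluster volume set gv01 cluster.server-quorum-type none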
2018 Apr 09 - 2 replies - Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks:
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
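For reference, a 2 x (2 + 1) distributed-replicate layout like the one described above is typically created with a command of the following shape; the volume name and brick paths here are placeholders, only the hostnames are taken from this thread:
# gluster volume create urd-gds-vol replica 3 arbiter 1 \
    urd-gds-001:/bricks/b1 urd-gds-002:/bricks/b1 urd-gds-000:/bricks/arb1 \
    urd-gds-003:/bricks/b2 urd-gds-004:/bricks/b2 urd-gds-000:/bricks/arb2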
2012 Feb 22 - 2 replies - "mismatching layouts" errors after expanding volume
Dear All-
There are a lot of errors of the following type in my client and NFS logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol: atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
2018 Apr 10 - 0 replies - Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount command-lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Peders?n <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and connected to
2018 Apr 10 - 1 reply - Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
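The client-side mount command-line asked about earlier in the thread would normally look something like the sketch below; the volume name and mount point are placeholders, and backup-volfile-servers is only needed if the client should be able to fetch the volfile from other peers:
# mount -t glusterfs -o backup-volfile-servers=urd-gds-002:urd-gds-003 urd-gds-001:/urd-gds-vol /mnt/gluster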
2012 Jun 11 - 1 reply - "mismatching layouts" flooding in the logs
I have the following being appended to the gluster logs at around 100 kB per second, on all 10 gluster servers:
[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
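These layout-mismatch messages usually follow a layout change such as add-brick. One commonly suggested step (not necessarily the resolution of this thread) is to re-run a fix-layout rebalance so the directory layouts are rewritten consistently; the volume name sites is inferred from the 0-sites-dht prefix in the log:
# gluster volume rebalance sites fix-layout start
# gluster volume rebalance sites status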
2018 Apr 09 - 2 replies - volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup, with one node down, I can't use the second node to mount the volume. I understand this is expected behaviour? Any way to allow the secondary node to keep functioning and then replicate what changed to the first (primary) node when it's back online? Or should I just
go for a third node to allow for this?
Also, how safe is it to set the following to none?
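If a third node is added later, a replica 2 volume can be converted to replica 3 with an arbiter instead of carrying a full third copy; a sketch, with a hypothetical host node3 and brick path:
# gluster peer probe node3
# gluster volume add-brick gv01 replica 3 arbiter 1 node3:/bricks/gv01/arbiter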
2011 Oct 18 - 2 replies - gluster rebalance taking three months
Hi guys,
we have a rebalance running on eight bricks since July and this is
what the status looks like right now:
===Tue Oct 18 13:45:01 CST 2011 ====
rebalance step 1: layout fix in progress: fixed layout 223623
There are roughly 8T of photos in the storage, so how long should this rebalance take?
What does the number (in this case 223623) represent?
Our gluster information:
Repository
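For reference, rebalance progress is normally checked with the status subcommand; the volume name below is a placeholder, since the thread does not show it:
# gluster volume rebalance <volname> status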
2018 Apr 09 - 0 replies - volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need to disable quorum so that you can still use the volume when one of the nodes goes down.
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
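As a follow-up to the advice above, the current quorum settings of a volume can be inspected before changing them; a sketch, assuming the volume from this thread is named gv01:
# gluster volume get gv01 cluster.quorum-type
# gluster volume get gv01 cluster.server-quorum-type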
2024 Jan 27 - 1 reply - Upgrade 10.4 -> 11.1 making problems
You don't need to mount it.
Like this:
# getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e
# file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e
trusted.gfid=0x00462be83e6149318bdadae1645c639e
trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
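The trusted.gfid2path value shown above is just the hex encoding of <parent-gfid>/<basename>; assuming xxd is available, it can be decoded like this:
# echo 30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079 | xxd -r -p
02c3790c-8f7b-4d6e-94d6-96912190d111/filelocking.py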
2024 Jun 26 - 1 reply - Confusion supreme
Hello all
I have a mail store on a replica 3 volume with no arbiter. A while ago the disk of one of the bricks failed and it took me several days to notice. When I did, I removed that brick from the volume,
replaced the failed disk, updated the OS on that machine from el8
to el9 and gluster on all three nodes from 10.3 to 11.1, added back
the brick and started a heal. Things appeared to work
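To see whether such a heal actually completed, the usual check is something like the following; the volume name is a placeholder since the thread does not name it, and the summary form is only available on recent releases:
# gluster volume heal <volname> info summary
# gluster volume heal <volname> info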
2017 Oct 19 - 3 replies - gluster tiering errors
All,
I am new to gluster and have some questions/concerns about some tiering
errors that I see in the log files.
OS: CentOs 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
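For context, overall tier activity (promotions, demotions, failures) can also be summarised from the CLI; a sketch, with <vol> standing in for the scrubbed volume name as in the log above:
# gluster volume tier <vol> status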
2017 Oct 22 - 0 replies - gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
2017 Sep 20 - 0 replies - "Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDRs and how they are used). Just glance through the logs of the client process where you saw the errors, which could give some hints. If you don't understand the logs, share them and we will try to look into it.
-Amar
On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote:
> I recently
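Client-side logging can be made more verbose while reproducing such an error; a sketch using the standard diagnostics option, with the volume name as a placeholder:
# gluster volume set <vol> diagnostics.client-log-level DEBUG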
2008 Dec 18 - 3 replies - Feedback and Questions on afr+unify
Hi,
I just installed and configured a couple of machines with glusterfs
(1.4.0-rc3). It seems to work great. Thanks for the amazing software.!
I've been looking for something like this for years.
I have some feedback and questions. My configuration is a bit
complicated. I have two machines, each with two disks, and each disk has two partitions that I wanted to use (i.e. 8
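In the 1.4.x series the afr+unify stacking was expressed in hand-written volfiles; the fragment below is only a rough sketch from memory of that format (the hostnames and brick names are made up), not the poster's actual configuration:
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host machine1
  option remote-subvolume brick1
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host machine2
  option remote-subvolume brick1
end-volume

volume afr1
  type cluster/afr
  subvolumes remote1 remote2
end-volume

# a cluster/unify volume (which needs its own namespace subvolume)
# would then be stacked on top of afr1, afr2, ...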
2012 Jan 13 - 1 reply - Quota problems with Gluster3.3b2
Hi everyone,
I'm playing with Gluster3.3b2, and everything is working fine when
uploading stuff through swift. However, when I enable quotas on Gluster,
I randomly get permission errors. Sometimes I can upload files, most
times I can't.
I'm mounting the partitions with the acl flag, I've tried wiping out
everything and starting from scratch, same result. As soon as I
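For reference, quota on a 3.x volume is managed from the gluster CLI; a sketch with a placeholder volume name and directory, not the poster's actual settings:
# gluster volume quota <volname> enable
# gluster volume quota <volname> limit-usage /some-directory 10GB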
2017 Oct 22 - 1 reply - gluster tiering errors
There are several "no space left on device" messages. I would first check that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
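Given the "no space left on device" errors mentioned above, free space on the hot-tier bricks is worth checking directly on each node; the brick path below is a placeholder:
# df -h /bricks/hot-tier
# gluster volume status <vol> detail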
2017 Oct 24 - 2 replies - gluster tiering errors
Milind - Thank you for the response.
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option Value
------ -----
cluster.watermark-hi 90
# gluster volume get <vol> cluster.watermark-low
Option
2017 Sep 20 - 1 reply - "Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64
client and an x86 client. Weirdly the client logs were almost identical.
Here's the ppc64 gluster client log of attempting to create a folder...
-------------
[2017-09-20 13:34:23.344321] D
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->