Displaying 20 results from an estimated 2000 matches similar to: "Experiencing errors after adding new nodes"
2012 Jun 11
1
"mismatching layouts" flooding in the logs
The following is being appended to the gluster logs at around 100 kB per second, on all 10 gluster servers:
[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
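These INFO messages mean the layout DHT holds in memory for a directory no longer matches the layout ranges recorded on disk, which is expected after adding bricks until the layouts are rewritten. A minimal sketch of the commonly suggested remedy, assuming a distributed volume with the hypothetical name VOLNAME:

  # recompute and rewrite directory layout ranges across all subvolumes,
  # without migrating any file data
  gluster volume rebalance VOLNAME fix-layout start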
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All-
There are a lot of errors of the following type in my client and NFS
logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol: atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk] 0-atmos-dht: mismatching
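An inode layout of 0 - 0 usually means the client has no layout cached for that directory yet. The on-disk side of the comparison can be inspected directly on a brick; a sketch, assuming root access and a hypothetical brick path /export/brick1:

  # dump the hex-encoded DHT layout range stored for a directory on this brick
  getfattr -n trusted.glusterfs.dht -e hex /export/brick1/some/dir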
2017 Dec 21
1
seeding my geo-replication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts provided in extras/geo-rep for an earlier
version upgrade. With a bit of tinkering, these have given me a file
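For context, the GFIDs in question are ordinary extended attributes on the bricks, so they can be read (and compared between sites) from the brick filesystem. A sketch, with /export/brick1 as a hypothetical brick path:

  # print the GFID gluster assigned to a file on this brick
  getfattr -n trusted.gfid -e hex /export/brick1/path/to/file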
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problems
When I do an NFS mount and run ls I get:
[root@ovirt share]# ls
ls: reading directory .: Too many levels of symbolic links
[root@ovirt share]# ls -fl
ls: reading directory .: Too many levels of symbolic links
total 3636
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096 Jun 21 19:29 ..
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096
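The Gluster NFS server only speaks NFSv3 over TCP, so one hedged first check is to rule out a client that negotiated the wrong protocol by forcing the mount options explicitly (server and export names are placeholders):

  # force NFSv3 over TCP without NLM locking when mounting a gluster volume
  mount -t nfs -o vers=3,tcp,nolock server:/share /mnt/share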
2017 Jun 01
0
Gluster client mount fails mid-flight with signum 15
This has been solved, as far as we can tell.
Problem was with KillUserProcesses=1 in logind.conf. This has been shown to kill mounts made using mount -a, both by root and by any user with sudo, at session logout.
Hope this will help anybody else who runs into this.
Thanks for all your help and
cheers
Gabbe
On 1 June 2017 at 09:24, Gabriel Lindeborg <gabriel.lindeborg at
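For anyone hitting this later, a minimal sketch of the fix described above (the exact value and the daemon restart are assumptions based on standard systemd behaviour):

  # /etc/systemd/logind.conf
  KillUserProcesses=no

  # apply the change without rebooting
  systemctl restart systemd-logind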
2017 Oct 17
2
Distribute rebalance issues
Hi,
I have a rebalance that has failed on one peer twice now. Rebalance
logs below (directories anonymised and some irrelevant log lines cut).
It looks like it loses connection to the brick, but immediately stops
the rebalance on that peer instead of waiting for reconnection - which
happens a second or so later.
Is this normal behaviour? So far it has been the same server and the
same (remote)
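When a rebalance aborts like this, the per-node view is worth capturing before retrying. A sketch, with VOLNAME as a placeholder:

  # show per-node rebalance progress, failures and run status
  gluster volume rebalance VOLNAME status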
2017 Jun 01
2
Gluster client mount fails mid-flight with signum 15
All four clients ran 3.10.2 as well.
The volumes had been running fine until we upgraded to 3.10, when we hit some issues with port mismatches. We restarted all the volumes, the servers and the clients, and now hit this issue.
We've since backed up the files, removed the volumes, removed the bricks, removed gluster, installed glusterfs 3.7.20, created new volumes on new bricks, restored the
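Port mismatches can usually be confirmed from the CLI before anything is torn down, since the brick ports clients should be using are reported per brick. A sketch, volume name assumed:

  # compare the ports bricks are actually listening on with what clients expect
  gluster volume status VOLNAME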
2012 Mar 09
1
dht log entries in fuse client after successful expansion/rebalance
Hi
I'm using Gluster 3.2.5. After expanding a 2x2 Distributed-Replicate
volume to 3x2 and performing a full rebalance, fuse clients log the
following messages for every directory access:
[2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk] 1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench
[2012-03-08 10:53:56.953065] I
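These are INFO-level messages. On releases that support per-volume log-level options (later 3.x; treat availability on 3.2.5 as an assumption), they can be silenced on clients rather than fixed:

  # log only WARNING and above on clients of this volume
  gluster volume set VOLNAME diagnostics.client-log-level WARNING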
2017 Jun 01
1
Gluster client mount fails mid-flight with signum 15
On Thu, Jun 01, 2017 at 01:52:23PM +0000, Gabriel Lindeborg wrote:
> This has been solved, as far as we can tell.
>
> Problem was with KillUserProcesses=1 in logind.conf. This has been shown
> to kill mounts made using mount -a, both by root and by any user with
> sudo at session logout.
Ah, yes, that could well be the cause of the problem.
> Hope this will help anybody else who run
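If session cleanup is otherwise wanted, a narrower alternative to disabling KillUserProcesses globally is to exempt just the affected user via lingering (a sketch; the username is a placeholder):

  # let this user's processes, including their mounts, survive session logout
  loginctl enable-linger someuser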
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks:
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
The gluster cluster was created on the 10.10.0.0/16 net, gluster peer
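One way to pin gluster traffic to one of the two networks is to make the peer hostnames resolve to the storage-network addresses on every server and client. A sketch of the /etc/hosts convention the poster describes (the addresses are illustrative):

  # /etc/hosts on every server and client
  10.10.0.10   urd-gds-000
  10.10.0.11   urd-gds-001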
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde@gaist.co.uk> wrote:
> Hi,
>
>
> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command-lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen@slu.se> wrote:
> Hi all!
>
> I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and are connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes,
On the first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
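For the client side of the question, a fuse mount command-line for this cluster would look something like the following (the volume name and mount point are assumptions):

  # mount via one peer, with fallbacks for fetching the volfile
  mount -t glusterfs -o backup-volfile-servers=urd-gds-002:urd-gds-003 \
      urd-gds-001:/VOLNAME /mnt/gluster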
2013 May 02
0
GlusterFS mount does not list directory content until parent directory is listed
Hello,
I have spotted some strange behaviour with a GlusterFS fuse mount: I am unable to list the files in a directory until its parent directory has been listed. However, if I list a file by its full path, it is listed on some client nodes.
Example:
localadmin@ldgpsua00000038:~$ ls -al /var/lib/nova/instances/_base/
ls: cannot access /var/lib/nova/instances/_base/: No such file or directory
localadmin at
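The workaround implied by the report, pending a real fix, is to list the parent first; a sketch:

  # list the parent so the client resolves its dentry for _base
  ls /var/lib/nova/instances
  # the child directory should now be accessible
  ls -al /var/lib/nova/instances/_base/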
2017 Oct 17
1
Distribute rebalance issues
Nithya,
Is there any way to increase the logging level of the brick? There is
nothing obvious (to me) in the log (see below for the same time period as
the latest rebalance failure). This is the only brick on that server that
has disconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-video-server: accepted client from
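There is a per-volume option for raising brick-side log verbosity (a sketch; the volume appears to be named video, judging from the 0-video-server log prefix, but treat that as an assumption):

  # raise brick logging to DEBUG while reproducing the disconnect
  gluster volume set video diagnostics.brick-log-level DEBUG
  # set it back to the default afterwards
  gluster volume set video diagnostics.brick-log-level INFO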
2013 Sep 19
0
dht_layout_dir_mismatch
I am having an odd problem in a new test environment we are setting up for a partner, and I am not sure where to look next to figure out the problem or really understand what the dht_layout_dir_mismatch INFO message is telling me.
I was bringing up a 4-node distributed volume; each brick is its own 19TB ext4 partition on a hardware RAID5. Each node has the volume mounted back to itself at /glusterfs via
2013 Feb 18
1
Directory metadata inconsistencies and missing output ("mismatched layout" and "no dentry for inode" error)
Hi, I'm running into a rather strange and frustrating bug and wondering if anyone on the mailing list might have some insight into what might be causing it. I'm running a cluster of two dozen nodes, where the processing nodes are also the gluster bricks (using the SLURM resource manager). Each node has the gluster volumes mounted natively (not NFS). All nodes are using v3.2.7. Each job in the
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete
information than I did the first time around. The full rebalance log
from the machine where I started the rebalance can be found at the
following link. It is slightly redacted - one search/replace was made
to replace an identifying word with REDACTED.
https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2010 Dec 15
0
Errors with Gluster 3.1.2qa2
Hi all,
I have just migrated my old gluster partition to a fresh one with 4
nodes:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
It solved my problems with latency and disk errors (like input/output errors
or file descriptor in bad state), but now I have many, many errors like this:
[2010-12-15 12:01:12.711136] I [dht-common.c:369:dht_revalidate_cbk]
2012 Jan 04
0
FUSE init failed
Hi,
I'm having an issue using the GlusterFS native client.
After doing a mount, the filesystem appears mounted, but any operation
results in a "Transport endpoint is not connected" message.
gluster peer status and volume info don't complain.
I've copied the mount log below which mentions an error at fuse_init.
The kernel is based on 2.6.15 and the FUSE API version is 7.3.
I'm using
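The FUSE protocol version the kernel offers is printed when the fuse module loads, which is a quick way to confirm whether a 2.6.15-era kernel is simply too old for the client. A sketch:

  # the kernel logs its FUSE protocol version at module init
  dmesg | grep -i 'fuse init'
  # expect a line like: fuse init (API version 7.3)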