Displaying 20 results from an estimated 7000 matches similar to: "Errors with Gluster 3.1.2qa2"
2011 Feb 24
1
Experiencing errors after adding new nodes
Hi,
I had a 2 node distributed cluster running on 3.1.1 and I added 2 more nodes. I then ran a rebalance on the cluster.
Now I am getting permission denied errors and I see the following in the client logs:
[2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument)
[2011-02-24 09:59:11.851656] I
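For anyone hitting the same thing: the usual expand-and-rebalance sequence looks roughly like this (the volume name "loader" is inferred from the loader-dht log prefix above; server names and brick paths are placeholders, not from the original post):

# gluster peer probe server3
# gluster peer probe server4
# gluster volume add-brick loader server3:/export/brick1 server4:/export/brick1
# gluster volume rebalance loader start
# gluster volume rebalance loader status

If clients still return EINVAL after the rebalance completes, comparing the rebalance status output across all nodes is a reasonable first step.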
2013 Feb 26
0
Replicated Volume Crashed
Hi,
I have a gluster volume that consists of 22 bricks and contains a single
folder with 3.6 million files. Yesterday the volume crashed and turned out
to be completely unresponsive, and I was forced to perform a hard reboot on
all gluster servers because they were so heavily overloaded that they could
not execute a reboot command issued from the shell. Each gluster
server has 12 CPU cores
2013 Jun 17
0
gluster client timeouts / found conflict
Hi list
Recently I've experienced more and more input/output errors from my most
write-heavy gluster filesystem.
The logfiles on the gluster servers show nothing, but the client(s) that get
the input/output errors (and timeouts) will, as far as I can tell, get errors
such as:
[2013-06-14 15:55:56] W [fuse-bridge.c:493:fuse_entry_cbk] glusterfs-fuse:
LOOKUP(/369/60702093) inode (ptr=0x1efd440,
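One hedged thing to check with timeouts like these is the client-side timeout tuning; the volume name <vol> below is a placeholder:

# gluster volume info <vol>
# gluster volume set <vol> network.ping-timeout 42

gluster volume info lists any reconfigured options; network.ping-timeout defaults to 42 seconds, so raising it only papers over the problem if the servers are genuinely stalling.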
2012 Mar 09
1
dht log entries in fuse client after successful expansion/rebalance
Hi
I'm using Gluster 3.2.5. After expanding a 2x2 Distributed-Replicate
volume to 3x2 and performing a full rebalance fuse clients log the
following messages for every directory access:
[2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk]
1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench
[2012-03-08 10:53:56.953065] I
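These messages are logged at INFO level while each client revalidates directories against the new layout. If they keep recurring, one option (the volume name "bfd" is inferred from the 1-bfd-dht log prefix) is to re-run a layout-only rebalance:

# gluster volume rebalance bfd fix-layout start
# gluster volume rebalance bfd status

fix-layout rewrites directory layouts to cover the new bricks without migrating file data.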
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
Hi there,
I'm running glusterfs version 3.1.0.
The client crashed after some time with the stack below.
[2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1:
Subvolume 'distribute-1' came back up; going online.
[2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1:
data self-heal triggered. path:
/streaming/set3/work/reduce.12.1294902171.dplog.temp,
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts in extras/geo-rep that were provided for an earlier
version upgrade. With a bit of tinkering, these have given me a file
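As a hedged aside for anyone reconstructing GFIDs after the fact: the FUSE client exposes a virtual xattr for reading a file's GFID straight off an existing mount (the path below is illustrative):

# getfattr -n glusterfs.gfid.string /mnt/gluster/some/file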
2013 May 02
0
GlusterFS mount does not list directory content until parent directory is listed
Hello,
I have spotted strange behaviour in a GlusterFS fuse mount. I am unable to list files in a directory until the parent directory has been listed. However, if I list a file by its full path, it shows up on some client nodes.
Example:
localadmin@ldgpsua00000038:~$ ls -al /var/lib/nova/instances/_base/
ls: cannot access /var/lib/nova/instances/_base/: No such file or directory
localadmin@
2017 Oct 22
1
gluster tiering errors
There are several messages saying "no space left on device". I would first check
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
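To act on the free-space suggestion above, a quick sketch of the check (volume name and brick path are placeholders):

# gluster volume status <vol> detail
# df -h /export/brick1

The detail output reports free disk space per brick, which is where tier migration needs headroom.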
2013 Sep 19
0
dht_layout_dir_mismatch
We are having an odd problem in a new test environment we are setting up for a
partner, and I'm not sure where to look next to figure out the problem or to
really understand what the dht_layout_dir_mismatch INFO message is
telling me.
I was bringing up a 4-node distributed volume; each brick is its own
19TB ext4 partition on hardware RAID 5. Each node has the volume
mounted back to itself at /glusterfs via
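A hedged way to see what the mismatch message is comparing is to read the DHT layout xattr straight off the bricks, on each server, for the same directory (the brick path below is a placeholder):

# getfattr -n trusted.glusterfs.dht -e hex /export/brick/testdir

The hex value encodes the hash range assigned to that brick for the directory; the INFO message fires when the range cached in the inode no longer matches what a brick reports.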
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
2017 Oct 27
0
gluster tiering errors
Herb,
I'm trying to weed out issues here.
So, I can see quota turned *on* and would like you to check the quota
settings and test to see system behavior *if quota is turned off*.
Although the file that failed to migrate was only 29K, I'm being a bit
paranoid while weeding out issues.
Are you still facing tiering errors ?
I can see your response to Alex with the disk space consumption and
2017 Oct 19
3
gluster tiering errors
All,
I am new to gluster and have some questions/concerns about some tiering
errors that I see in the log files.
OS: CentOs 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                                  Value
------                                  -----
cluster.watermark-hi                    90
# gluster volume get <vol> cluster.watermark-low
Option
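If the watermarks themselves need changing, they are ordinary volume options; a sketch with example values only (the low watermark must stay below the high one):

# gluster volume set <vol> cluster.watermark-low 75
# gluster volume set <vol> cluster.watermark-hi 90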
2012 Jun 11
1
"mismatching layouts" flooding in the logs
I have the following appended to gluster logs at around 100kB of logs per second, on all 10 gluster servers:
[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
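Since the flood is all INFO-level, one stopgap while investigating (the volume name "sites" is taken from the 0-sites-dht prefix) is to raise the client log level:

# gluster volume set sites diagnostics.client-log-level WARNING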
2011 May 07
1
Gluster "Peer Rejected"
Hello All,
I have 8 servers.
7 of the 8 say that gbe02 is in "State: Peer Rejected (Connected)".
gbe08 says it is connected to the other 7, but they are all State: Peer Rejected
(Connected).
So it would appear that gbe02 is out of sync with the group.
I triggered a manual self-heal by running the recommended find on a gluster
mount.
I'm stuck... I cannot find ANY docs on this
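For reference, the find-based self-heal trigger recommended in the 3.x docs looks like this (the mount point is a placeholder):

# find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null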
2018 May 23
0
Rebalance state stuck or corrupted
We have had a rebalance operation going on for a few days. After a couple
of days the rebalance status said "failed". We stopped the rebalance operation
with gluster volume rebalance gv0 stop. The rebalance log indicated gluster
did try to stop the rebalance. However, when we now try to stop the volume
or restart the rebalance, it says there's a rebalance operation going on
and volume
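A sketch of the sequence in question, using the gv0 volume name from the post:

# gluster volume rebalance gv0 status
# gluster volume rebalance gv0 stop

If glusterd's view of the rebalance state is stale, the status output on each node is worth comparing before retrying the stop.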
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All-
There are a lot of errors of the following type in my client and NFS
logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I
[dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol:
atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
This has been solved, as far as we can tell.
The problem was with KillUserProcesses=1 in logind.conf. This has been shown to kill mounts made using mount -a, both by root and by any user with sudo, at session logout.
Hope this will help anybody else who runs into this.
Thanks for all your help and
cheers
Gabbe
On 1 June 2017 at 09:24, Gabriel Lindeborg <gabriel.lindeborg at
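For anyone hitting the same thing, the fix described above amounts to the following (assuming that restarting logind on the host is acceptable):

In /etc/systemd/logind.conf:
KillUserProcesses=no

# systemctl restart systemd-logind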
2009 Jun 26
0
Error when expand dht model volumes
Hi all:
I ran into a problem when expanding DHT volumes. I was writing into a DHT storage directory until it reached 90% capacity, so I added four new volumes to the configuration file.
But after restarting, some of the data in the directory disappeared. Why? Is there a special action required before expanding the volumes?
My client configuration file is this:
volume client1
type protocol/client
option transport-type
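For context, a client volfile in the legacy syntax of that era would typically continue along these lines; the host and subvolume names here are placeholders, not recovered from the truncated post:

volume client1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick1
end-volume

volume dht0
  type cluster/distribute
  subvolumes client1 client2 client3 client4
end-volume

After adding new subvolumes to a distribute setup, directory layouts generally need to be refreshed before the old contents show up consistently.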
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
On Thu, Jun 01, 2017 at 01:52:23PM +0000, Gabriel Lindeborg wrote:
> This has been solved, as far as we can tell.
>
> Problem was with KillUserProcesses=1 in logind.conf. This has shown to
> kill mounts made using mount -a, both by root and by any user with
> sudo, at session logout.
Ah, yes, that could well be the cause of the problem.
> Hope this will help anybody else who run