Displaying 4 results from an estimated 4 matches for "chglbcvtprd04".
2017 Sep 29 · 2 · nfs-ganesha locking problems
...second cluster only hosts the cluster.enable-shared-storage volume
across 3 nodes. It also runs nfs-ganesha in a clustered configuration
(pacemaker, corosync); nfs-ganesha serves the volumes from the first
cluster.
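For context, nfs-ganesha exports a Gluster volume through its GLUSTER FSAL, declared in an EXPORT block in ganesha.conf. A minimal sketch of such a block (the volume name and host here are hypothetical placeholders, not taken from this thread):

```
EXPORT {
    Export_Id = 1;                  # unique id per export
    Path = "/myvol";
    Pseudo = "/myvol";              # NFSv4 pseudo-fs path
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";     # a node of the cluster holding the volume
        Volume = "myvol";           # hypothetical Gluster volume name
    }
}
```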
Any idea what's wrong?
Kind Regards
Bernhard
CLUSTER 1 info
==============
root@chglbcvtprd04:/etc# cat os-release
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="...
2018 Apr 12 · 0 · issues with replicating data to a new brick
...p the old bricks.
starting point is:
Volume Name: Server_Monthly_02
Type: Replicate
Volume ID: 0ada8e12-15f7-42e9-9da3-2734b04e04e9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: chastcvtprd04:/data/glusterfs/Server_Monthly/2I-1-40/brick
Brick2: chglbcvtprd04:/data/glusterfs/Server_Monthly/2I-1-40/brick
Options Reconfigured:
features.scrub: Inactive
features.bitrot: off
nfs.disable: on
auth.allow: 127.0.0.1,10.30.28.43,10.30.28.44,10.30.28.17,10.30.28.18,10.8.13.132,10.30.28.30,10.30.28.31
performance.readdir-ahead: on
diagnostics.latency-measurement: o...
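Getting data onto a new brick in a setup like the one above is normally done either by growing the replica set or by replacing a brick outright. A hedged sketch of the usual CLI steps (the new brick host and path are hypothetical placeholders, not from this thread; these commands need a live cluster):

```shell
# Grow the 1 x 2 replica to 1 x 3 so the new brick receives a full copy
# (newhost:/data/... is a hypothetical placeholder):
gluster volume add-brick Server_Monthly_02 replica 3 \
    newhost:/data/glusterfs/Server_Monthly/new/brick

# Self-heal then copies the data onto the new brick; watch progress with:
gluster volume heal Server_Monthly_02 info
```

Alternatively, `gluster volume replace-brick <vol> <old> <new> commit force` swaps a brick in place and lets self-heal repopulate it.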
2017 Jun 19 · 0 · total outage - almost
....afr.dirty=0x000000000000000000000000
trusted.bit-rot.bad-file=0x3100
trusted.bit-rot.signature=0x011400000000000000ee3e3ac6a79b8efc42d0904ca431cb20d01890d300c041e905d9d78a562bf276
trusted.bit-rot.version=0x14000000000000005841bb3c000ac813
trusted.gfid=0x1427a79086f14ed2902e3c18e133d02b
root@chglbcvtprd04:~# getfattr -d -e hex -m - /data/glusterfs/Server_Standard/1I-1-14/brick/Server_Standard/CV_MAGNETIC/V_1050932/CHUNK_11126559/SFILE_CONTAINER_014
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/Server_Standard/1I-1-14/brick/Server_Standard/CV_MAGNETIC/V_105093...
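The hex values printed by `getfattr -e hex` can be decoded offline. A small self-contained Python sketch (the parsing helper is mine, not part of Gluster; the sample lines are copied from the output above):

```python
import uuid

def parse_hex_xattr(line):
    """Split a `getfattr -e hex` line like 'trusted.gfid=0x1427...' into (name, bytes)."""
    name, _, value = line.partition("=")
    return name, bytes.fromhex(value[2:])  # strip the 0x prefix

# Sample lines copied from the getfattr output above.
gfid_name, gfid_raw = parse_hex_xattr(
    "trusted.gfid=0x1427a79086f14ed2902e3c18e133d02b")
bad_name, bad_raw = parse_hex_xattr("trusted.bit-rot.bad-file=0x3100")

# The gfid is the 16-byte UUID identifying the file inside the volume.
print(uuid.UUID(bytes=gfid_raw))  # 1427a790-86f1-4ed2-902e-3c18e133d02b

# bit-rot.bad-file here decodes to the ASCII marker "1" plus a NUL,
# i.e. the file has been flagged as corrupt by the bitrot scrubber.
print(bad_raw)  # b'1\x00'
```

The presence of `trusted.bit-rot.bad-file` on a brick file is what makes reads of it return EIO through the mount.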
2017 Jun 19 · 2 · total outage - almost
Hi,
we use a bunch of replicated gluster volumes as a backend for our
backup. Yesterday I noticed that some synthetic backups failed because
of I/O errors.
Today I ran "find /gluster_vol -type f | xargs md5sum" and got loads
of I/O errors.
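As an aside, `xargs` splits its input on whitespace, so a scan like the one above silently breaks on paths containing spaces. A null-delimited variant (a generic hardening sketch, not from the original mail; demonstrated against a throwaway directory rather than a live gluster mount):

```shell
# Checksum every regular file under a tree; -print0 / -0 keep paths
# with spaces or newlines intact.
scan() {
    find "$1" -type f -print0 | xargs -0 md5sum
}

# Demo against a temporary directory instead of /gluster_vol.
dir=$(mktemp -d)
printf 'hello\n' > "$dir/a file with spaces"
scan "$dir"          # prints one checksum line despite the spaces
rm -rf "$dir"
```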
The brick log file shows the errors below:
[2017-06-19 13:42:33.554875] E [MSGID: 116020]
[bit-rot-stub.c:566:br_stub_check_bad_object]