Displaying 10 results from an estimated 10 matches for "1linuxengineer".
2017 Jun 19
0
total outage - almost
...x1427a79086f14ed2902e3c18e133d02b
the "dirty" is 0, that's good, isn't it?
what's the "trusted.bit-rot.bad-file=0x3100" information?
Best Regards
Bernhard Dübi
BTW: I saved all logs, maybe I can upload them somewhere
2017-06-19 15:55 GMT+02:00 Bernhard Dübi <1linuxengineer at gmail.com>:
> Hi,
>
> we use a bunch of replicated gluster volumes as a backend for our
> backup. Yesterday I noticed that some synthetic backups failed because
> of I/O errors.
>
> Today I ran "find /gluster_vol -type f | xargs md5sum" and got loads
> of I/O...
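As far as I know, trusted.bit-rot.bad-file is the marker the GlusterFS bit-rot stub sets on a brick file once the scrubber has flagged it as corrupted; access to such objects is then denied with EIO. A minimal sketch of how one might inspect this, with a placeholder brick path and volume name:
getfattr -d -m . -e hex /bricks/brick1/path/to/file   # dump all xattrs of the file on the brick, in hex
gluster volume bitrot myvolume scrub status           # scrubber's view of corrupted objects on the volume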
2017 Jun 19
2
total outage - almost
Hi,
we use a bunch of replicated gluster volumes as a backend for our
backup. Yesterday I noticed that some synthetic backups failed because
of I/O errors.
Today I ran "find /gluster_vol -type f | xargs md5sum" and got loads
of I/O errors.
The brick log file shows the below errors
[2017-06-19 13:42:33.554875] E [MSGID: 116020]
[bit-rot-stub.c:566:br_stub_check_bad_object]
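To keep a record of which files return I/O errors instead of just watching them scroll by, the scan quoted in the message could be extended slightly; a sketch assuming the same /gluster_vol mount point:
find /gluster_vol -type f -print0 | xargs -0 md5sum > checksums.ok 2> checksums.err   # stderr collects the failing paths
grep 'Input/output error' checksums.err                                               # list only the files that failed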
2017 Jul 16
0
Bug 1374166 or similar
Hi,
Both Gluster servers were rebooted and now the unlink directory is clean.
Best Regards
Bernhard
2017-07-14 12:43 GMT+02:00 Bernhard Dübi <1linuxengineer at gmail.com>:
> Hi,
>
> yes, I mounted the Gluster volume and deleted the files from the
> volume, not the brick
>
> mount -t glusterfs hostname:volname /mnt
> cd /mnt/some/directory
> rm -rf *
>
> restart of nfs-ganesha is planned for tomorrow. I'll keep you po...
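The "unlink directory" referred to here is, to my understanding, the .glusterfs/unlink directory inside each brick, where GlusterFS parks deleted files that are still held open by some client. A sketch of how one might check it, with a placeholder brick path:
ls -la /bricks/brick1/.glusterfs/unlink   # should be empty once no client holds the deleted files open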
2017 Jul 18
1
Bug 1374166 or similar
...s open fd.
In this case, since a lazy umount was performed, the ganesha server may still
keep the fds opened by that client, so gluster keeps
the unlink directory even though it was removed from the fuse mount.
--
Jiffin
> Best Regards
> Bernhard
>
> 2017-07-14 12:43 GMT+02:00 Bernhard Dübi <1linuxengineer at gmail.com>:
>> Hi,
>>
>> yes, I mounted the Gluster volume and deleted the files from the
>> volume, not the brick
>>
>> mount -t glusterfs hostname:volname /mnt
>> cd /mnt/some/directory
>> rm -rf *
>>
>> restart of nfs-ganesha is pl...
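One way to confirm this explanation would be to look at the file descriptors the ganesha daemon still holds: files that were unlinked but are still open show up as "(deleted)" under /proc. A sketch, assuming the usual process name ganesha.nfsd:
ls -l /proc/$(pidof ganesha.nfsd)/fd 2>/dev/null | grep '(deleted)'   # fds ganesha still holds on unlinked files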
2017 Jul 14
2
Bug 1374166 or similar
Hi,
yes, I mounted the Gluster volume and deleted the files from the
volume, not the brick
mount -t glusterfs hostname:volname /mnt
cd /mnt/some/directory
rm -rf *
restart of nfs-ganesha is planned for tomorrow. I'll keep you posted
BTW: nfs-ganesha is running on a separate server in standalone configuration
Best Regards
Bernhard
2017-07-14 10:43 GMT+02:00 Jiffin Tony Thottan <jthottan
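A sketch of the planned restart, assuming the systemd unit name nfs-ganesha used by the common packages:
systemctl restart nfs-ganesha
systemctl status nfs-ganesha   # confirm the daemon came back up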
2017 Nov 28
0
move brick to new location
Hello everybody,
We have a number of "replica 3 arbiter 1" or (2 + 1) volumes.
Because we're running out of space on some volumes, I need to optimize
the usage of the physical disks. That means I want to consolidate
volumes with low usage onto the same physical disk. I can do it with
"replace-brick commit force", but that looks a bit drastic to me
because it immediately drops
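For a replicated volume, dropping the old brick immediately is often acceptable because the surviving replicas rebuild the new brick via self-heal; whether that risk is tolerable depends on the setup. A sketch of that pattern with placeholder host and brick names:
gluster volume replace-brick myvolume oldhost:/bricks/b1 newhost:/bricks/b1 commit force
gluster volume heal myvolume info   # watch until the pending heal entries drain to zero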
2017 Jul 14
2
Bug 1374166 or similar
Hello everybody,
I'm in a similar situation as described in
https://bugzilla.redhat.com/show_bug.cgi?id=1374166
I have a gluster volume exported through ganesha. We had some problems
on the gluster server and the NFS mount on the client was hanging.
I did a lazy umount of the NFS mount on the client, then went to the
Gluster server, mounted the Gluster volume and deleted a bunch of
files.
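The lazy umount mentioned detaches the hung mount point immediately and defers the cleanup until nothing references it any more; a sketch with a placeholder mount point:
umount -l /mnt/nfs_share   # lazy detach of the hung NFS mount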
2018 Apr 12
0
issues with replicating data to a new brick
Hello everybody,
I have some kind of a situation here.
I want to move some volumes to new hosts. The idea is to add the new
bricks to the volume, sync, and then drop the old bricks.
starting point is:
Volume Name: Server_Monthly_02
Type: Replicate
Volume ID: 0ada8e12-15f7-42e9-9da3-2734b04e04e9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1:
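For a 1 x 2 replicate like this one, a common approach is to raise the replica count with the new brick, let self-heal copy the data, and then drop the old brick; a sketch with placeholder hosts and paths (only the volume name is taken from the message):
gluster volume add-brick Server_Monthly_02 replica 3 newhost:/bricks/monthly02
gluster volume heal Server_Monthly_02 info   # wait until no entries remain
gluster volume remove-brick Server_Monthly_02 replica 2 oldhost:/bricks/monthly02 force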
2017 Oct 02
1
nfs-ganesha locking problems
Hi Soumya,
What I can say so far:
It is working on a standalone system but not on the clustered system.
From reading the ganesha wiki I have the impression that it is
possible to change the log level without restarting ganesha. I was
playing with dbus-send but so far was unsuccessful. If you can help me
with that, this would be great.
Here are some details about the tested machines. The NFS client
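For reference, nfs-ganesha does expose its log levels over D-Bus; the exact object path, interface and component names below are from memory of the ganesha documentation and should be treated as assumptions to verify against the installed version:
dbus-send --system --print-reply --dest=org.ganesha.nfsd \
  /org/ganesha/nfsd/admin org.freedesktop.DBus.Properties.Set \
  string:org.ganesha.nfsd.log.component string:COMPONENT_ALL variant:string:FULL_DEBUG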
2017 Sep 29
2
nfs-ganesha locking problems
Hi,
I have a problem with nfs-ganesha serving gluster volumes
I can read and write files but then one of the DBAs tried to dump an
Oracle DB onto the NFS share and got the following errors:
Export: Release 11.2.0.4.0 - Production on Wed Sep 27 23:27:48 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition