Displaying 20 results from an estimated 100 matches similar to: "Question regarding if statement in while loop"
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your quick answer. As requested, you will find below the "stat" and "getfattr" output for one of the files and its parent directory from all three nodes of my cluster.
NODE 1:
File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey'
Size: 0 Blocks: 38 IO Block: 131072 regular
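For reference, a hedged sketch (not quoted from the message) of the commands typically used to gather this output on each node; the long directory path is abbreviated with '...' here and must be substituted:

stat /data/myvolume-private/brick/.../OC_DEFAULT_MODULE/filename.shareKey
getfattr -d -m . -e hex /data/myvolume-private/brick/.../OC_DEFAULT_MODULE/filename.shareKey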
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inside...
best regards
Dietmar
Am 13.03.2018 um 06:38 schrieb Kotresh Hiremath Ravishankar:
> Hi Dietmar,
>
> I am trying to understand the problem and have few questions.
>
> 1. Is trashcan enabled only on master volume?
no, trashcan is also enabled on the slave. The settings are the same as on the
master, but the trashcan on the slave is complete
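As a hedged aside (volume name is a placeholder), the trashcan feature discussed here is the per-volume option usually toggled and inspected like this:

gluster volume set myvolume features.trash on   # enable the trash translator
gluster volume get myvolume features.trash      # check the current setting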
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote:
> Dear all,
>
> I upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I still have exactly the same problem as initially posted in this thread.
>
> It looks like this bug is not resolved, as I just now got 3 unsynched files on my arbiter node
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all,
I upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I still have exactly the same problem as initially posted in this thread.
It looks like this bug is not resolved, as I just now got 3 unsynched files on my arbiter node, like before the upgrade. This problem started
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi,
Please find below the answers to your questions
1) I have never touched the cluster.quorum-type option. Currently it is set as following for this volume:
Option Value
------ -----
cluster.quorum-type none
2) The .shareKey
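As a hedged aside on the quorum option above (the volume name is a placeholder; 'auto' is the value commonly recommended for replica 3 volumes), this is how the option is typically inspected and changed with the gluster CLI:

gluster volume get myvolume cluster.quorum-type
gluster volume set myvolume cluster.quorum-type auto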
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
Hi mabi,
Some questions:
-Did you by any chance change the cluster.quorum-type option from the
default values?
-Is filename.shareKey supposed to be an empty file? It looks like the file
was fallocated with the keep-size option but never written to. (On the 2
data bricks, stat output shows Size = 0 but non-zero Blocks, and yet a
'regular empty file'.)
-Do you have some sort of a
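A hedged illustration (not from this message) of the fallocate-with-keep-size state described above, which allocates blocks without changing the zero file size:

touch file.shareKey
fallocate --keep-size --length 16K file.shareKey   # allocate blocks past EOF, keep apparent size 0
stat file.shareKey   # shows Size: 0, non-zero Blocks, 'regular empty file'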
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
------- Original Message -------
On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> >
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
Hello,
I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do?
For now, in order to get rid of these 3 unsynched files, shall I use the same method that was suggested to me earlier in this thread?
Thanks,
Mabi
------- Original Message -------
On May 17, 2018 11:07 PM, mabi <mabi at protonmail.ch> wrote:
>
> Hi Ravi,
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
On 05/23/2018 12:47 PM, mabi wrote:
> Hello,
>
> I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do?
>
> For now in order to get rid of these 3 unsynched files shall I do the same method that was suggested to me in this thread?
Sorry Mabi, I haven't had a chance to dig deeper into this. The
workaround of
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote:
> Thanks Ravi for your answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry, I should have been clearer. Yes, the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
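The command is truncated above; as a hedged sketch of its general shape (the volume/client suffixes and file path are placeholders, and which xattrs need removing depends on the getfattr output), the afr xattrs are removed on the arbiter brick's copy of the file with setfattr -x:

setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/path/to/file
setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/path/to/file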
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
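A hedged sketch of the commands behind that check (volume name as used elsewhere in the thread):

gluster volume heal myvol-private info                # list entries pending heal, per brick
gluster volume heal myvol-private info split-brain    # list entries gluster itself flags as split-brain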
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again, thanks, that worked and I now have no more unsynched files.
You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release, and as such I would not like to have to upgrade to it.
------- Original Message -------
On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09
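A hedged side note (not from the thread): for regular files, the gfid entry under .glusterfs on a brick is a hardlink to the real file, so the path behind a gfid from a log line like the one above can usually be recovered with find -samefile:

find /data/myvol-private/brick -samefile \
    /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a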
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File:
2009 Sep 29
1
Interesting function in a function problem....
# I have these data. Basically, I want to run a tapply to calculate the mean and st. err. by factor.
# The problem is, I want to add a finite correction to the variance prior to calculating the standard error.
# Now I know I could just do this in 3/4 steps by calculating the var, applying the correction, then calculating the StErr.
# However, I am trying to learn about functions and how they work. So
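The post is cut off here; a minimal sketch of the kind of function it seems to be asking for, assuming a finite population correction of (1 - n/N) with a hypothetical population size N:

se.fpc <- function(x, N) {
  n <- length(x)
  v <- var(x) * (1 - n / N)  # variance with finite population correction
  sqrt(v / n)                # standard error
}
# made-up example data
vals   <- c(5.1, 4.9, 6.2, 5.8, 7.0, 6.5)
groups <- factor(c("a", "a", "a", "b", "b", "b"))
tapply(vals, groups, se.fpc, N = 50)   # corrected std. error by factor
tapply(vals, groups, mean)             # mean by factor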
2009 Apr 22
2
purge-empty-dirs and max-file-size confusion
I want to use --min-size to copy just large files (and their necessary
parent directories), but everything I've tried copies *all* the source
directories, and creates them empty on the destination even if they
don't have any big files in them. I only want the minimal directory
hierarchies that contain the big files. This doesn't work:
$ rm -rf /tmp/foo
$ rsync -ai --min-size
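The command is truncated above. Since --min-size is a transfer limit rather than a filter rule, --prune-empty-dirs reportedly does not prune the directories it leaves empty. A hedged sketch of a commonly suggested workaround (size threshold and paths are placeholders): build the file list with find, then let --files-from create only the parent directories that are actually needed:

$ cd /src && find . -type f -size +10M > /tmp/bigfiles.txt
$ rsync -ai --files-from=/tmp/bigfiles.txt /src/ /dest/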
2013 Nov 12
1
How to replace NA's data with some value
Hi all,
I have a data set with missing value. I would like to estimate those
missing value by using normal ratio method.
Below is part of my data:
AS    BL    Serdang  Jhr   Phg   Target station
0     0.0   12.8     0.0   23.7  0.0
6     0.0   81.7     0.2   0.0   NA
0     1.5   60.9     0.0   0.0   15.5
1     13.0  56.8     17.5  32.8  6.4
4     3.0   66.4
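The data are cut off above. As a hedged sketch (mine, not from the post), the normal ratio method estimates the missing target-station value as a mean of the neighbouring stations' observations weighted by ratios of long-term means; the 'normals' below are hypothetical:

normal_ratio <- function(obs, normals, target_normal) {
  # obs: same-day observations at the neighbouring stations
  # normals: long-term mean rainfall at those stations
  mean(target_normal / normals * obs)
}
# made-up long-term means for the five stations
normal_ratio(obs = c(6, 0.0, 81.7, 0.2, 0.0),
             normals = c(12, 35, 65, 18, 28),
             target_normal = 22)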
2017 Nov 16
4
[Bug 13147] New: inconsistent behaviour regarding vanished files information
https://bugzilla.samba.org/show_bug.cgi?id=13147
Bug ID: 13147
Summary: inconsistent behaviour regarding vanished files
information
Product: rsync
Version: 3.0.9
Hardware: x64
OS: Mac OS X
Status: NEW
Severity: minor
Priority: P5
Component: core
Assignee:
2017 Aug 11
0
Using automatic variable selection procedures with eblupFH
Hello,
I am currently working with EBLUP estimators (the eblupFH function in R) and I was wondering if there are any automatic variable selection procedures I can use. I have already found the "regsubsets" and "step" functions, but those don't work with the eblupFH function.
Thank you in advance.
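A hedged sketch of a manual alternative, assuming the 'sae' package's eblupFH (which, when fit with the default REML method, returns AIC under fit$goodness); the formulas, 'vardir', and 'mydata' below are placeholders:

library(sae)
candidates <- list(y ~ x1, y ~ x2, y ~ x1 + x2)      # hypothetical model set
aics <- sapply(candidates, function(f) {
  m <- eblupFH(f, vardir = vardir, data = mydata)    # vardir: sampling variances column
  m$fit$goodness["AIC"]
})
candidates[[which.min(aics)]]                         # candidate with the lowest AIC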