
Displaying 20 results from an estimated 1000 matches similar to: "purge-empty-dirs and max-file-size confusion"

2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your quick answer. As requested, you will find below the "stat" and "getfattr" output for one of the files and its parent directory from all three nodes of my cluster. NODE 1: File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey' Size: 0 Blocks: 38 IO Block: 131072 regular
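(For anyone collecting the same diagnostics: stat and getfattr are run directly against the file on each brick, not on the fuse mount. A minimal sketch, reusing the path above as a placeholder:)

    # On each node, query the file on the brick itself:
    stat /data/myvolume-private/brick/.../OC_DEFAULT_MODULE/filename.shareKey
    # Dump all extended attributes in hex (this is where trusted.afr.* shows up):
    getfattr -d -m . -e hex /data/myvolume-private/brick/.../OC_DEFAULT_MODULE/filename.shareKey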
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
Hi mabi, Some questions: - Did you by any chance change the cluster.quorum-type option from the default value? - Is filename.shareKey supposed to be an empty file? It looks like the file was fallocated with the keep-size option but never written to. (On the 2 data bricks, stat output shows Size = 0, but non-zero Blocks, and yet a 'regular empty file'.) - Do you have some sort of a
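(The keep-size behaviour described here is easy to reproduce locally; a minimal sketch with a throwaway file name:)

    # Preallocate 16 KiB without changing the apparent file size:
    fallocate -n -l 16K /tmp/keepsize-demo
    # stat now reports Size: 0 but a non-zero Blocks count, matching the symptom above:
    stat /tmp/keepsize-demo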
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi, Please find below the answers to your questions. 1) I have never touched the cluster.quorum-type option. Currently it is set as follows for this volume: Option Value ------ ----- cluster.quorum-type none 2) The .shareKey
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
Hello, I just wanted to ask if you had time to look into this bug I am encountering, and if there is anything else I can do? For now, in order to get rid of these 3 unsynched files, shall I use the same method that was suggested to me in this thread? Thanks, Mabi ------- Original Message ------- On May 17, 2018 11:07 PM, mabi <mabi at protonmail.ch> wrote: > Hi Ravi,
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
On 05/23/2018 12:47 PM, mabi wrote: > Hello, > > I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do? > > For now in order to get rid of these 3 unsynched files shall I do the same method that was suggested to me in this thread? Sorry Mabi, I haven't had a chance to dig deeper into this. The workaround of
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
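(Spelled out, the removal would look roughly like this; a sketch only — the exact trusted.afr.* names must be taken from the getfattr output on that brick, and the file path here is a placeholder:)

    # On node 3, against the brick path, one setfattr per stale xattr:
    setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/path/to/problematicfile
    setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/path/to/problematicfile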
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again thanks, that worked and I now have no more unsynched files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. ------- Original Message ------- On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote: >
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer. Stupid question but how do I delete the trusted.afr xattrs on this brick? And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? ------- Original Message ------- On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 04/09/2018 04:36 PM, mabi wrote: > >
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote: > As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically. All nodes were always online and there
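(For reference, the check described above; a minimal sketch, assuming the volume is named myvol-private:)

    # List entries pending heal on each brick of the volume:
    gluster volume heal myvol-private info
    # Per-brick pending-heal counts only:
    gluster volume heal myvol-private statistics heal-count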
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote: > Dear all, > > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. > > It looks like this bug is not resolved, as just now I got 3 unsynched files on my arbiter node
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all, I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. It looks like this bug is not resolved, as just now I got 3 unsynched files on my arbiter node, exactly as before upgrading. This problem started
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:40 PM, mabi wrote: > Again thanks, that worked and I now have no more unsynched files. > > You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. I don't think there will be another 3.12 release. Adding Karthik to see
2002 Nov 28
4
cut and paste problem
Dear all, I am new to wine and am trying to use it to replace vmware for word/powerpoint applications. The first problem I have encountered is that I am unable to cut and paste in any of the applications (notepad/word/powerpoint). The issue seems to be related to the clipboard behavior, because when I cut (under notepad) I get an error message stating "Empty clipboard", which is kind of
2003 Jul 17
1
2 GB Limit when writing to smbfs filesystems
I'm running RedHat 8.0 with samba-2.2.7-5.8.0 (installed from the RedHat distribution). When I use cpio to write a backup (> 2GB) to a smbfs filesystem, I get the error: File size limit exceeded. I get the same error when I copy (cp) a file (> 2GB) from a Linux ext3 filesystem to the smbfs filesystem. The smbfs filesystem is mounted from a Windows 2000 Professional workstation. After
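(On kernels of that era, smbfs needed large file support to be enabled explicitly at mount time via the lfs option; a sketch, with the share name, mount point, and username invented:)

    # Remount the share with large file support (>2GB files):
    mount -t smbfs -o lfs,username=backupuser //w2kbox/backups /mnt/backups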
2005 May 18
3
odd line in current CVS for firewall
From a diff of my current shorewall firewall script with the new one from CVS today: $ diff -w /usr/share/shorewall/firewall /usr/src/shorewall/s/firewall [...] 673c910 < for network in $networks; do --- > for networks in $networks; do I don't think that "for networks in $networks" works well. -- -IAN! Ian! D. Allen Ottawa, Ontario,
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse to fix this (e.g., rm). I played around with this on my OpenSolaris box at home, read around
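(The workaround usually given for the full-filesystem case is to truncate a large file in place first — truncation needs no new blocks — and only then unlink it; a sketch with a placeholder file name:)

    # Over the NFS mount, free space without allocating any:
    cat /dev/null > ~/some-big-file   # truncate in place
    rm ~/some-big-file                # the unlink can now proceed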
2013 Dec 04
1
Query on make
Greetings, I have created several scripts which need to be packaged. I have done my groundwork on rpmbuild. Let us say I have 4 directories with scripts in them: dir1, dir2, dir3 and dir4. I want to create different packages which will contain the compiled code (err.. shc): pack1: dir1,dir2 pack2: dir1,dir3 pack3: dir1,dir3 I want the resulting rpm packages to reside elsewhere (perhaps
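(One way to get several rpms out of a single build is subpackages in one spec file; a rough sketch with invented names that omits the %prep/%build/%install plumbing:)

    # myscripts.spec (fragment; every name and path here is invented):
    Name:           myscripts
    Version:        1.0
    Release:        1
    Summary:        Collected shc-compiled scripts
    License:        GPLv2

    %description
    Umbrella spec; the real content lives in the subpackages.

    %package pack1
    Summary:        Scripts from dir1 and dir2
    %description pack1
    shc-compiled scripts from dir1 and dir2.

    %files pack1
    /opt/myscripts/dir1
    /opt/myscripts/dir2

    # Build, dropping the rpms into a different output directory:
    # rpmbuild -bb --define "_rpmdir /srv/rpms" myscripts.spec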