search for: contry

Displaying 20 results from an estimated 43 matches for "contry".

Did you mean: country
2005 Dec 15
2
survexp ratetables for european contries?
Dear All, Does someone have, or know of survexp ratetables for european contries, especially Austria and Germany? I know only about slopop in the package relsurv. Thanks in advance Heinz Tüchler
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. NODE 1: File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey' Size: 0 Blocks: 38 IO Block: 131072 regular
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
Hi mabi, Some questions: -Did you by any chance change the cluster.quorum-type option from the default values? -Is filename.shareKey supposed to be an empty file? Looks like the file was fallocated with the keep-size option but never written to. (On the 2 data bricks, stat output shows Size = 0, but non-zero Blocks and yet a 'regular empty file'). -Do you have some sort of a
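A hedged sketch of how those two questions could be checked on one of the nodes; the volume and file names come from the excerpts above, while the directory chain is a placeholder:

    # Has cluster.quorum-type been changed from its default?
    gluster volume get myvolume-private cluster.quorum-type
    # Size 0 with non-zero Blocks on a data brick is consistent with a file that was
    # fallocated with the keep-size option (FALLOC_FL_KEEP_SIZE) but never written to
    stat /data/myvolume-private/brick/PATH/TO/OC_DEFAULT_MODULE/filename.shareKey   # placeholder path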
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi, Please find below the answers to your questions 1) I have never touched the cluster.quorum-type option. Currently it is set as follows for this volume: Option Value ------ ----- cluster.quorum-type none 2) The .shareKey
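For reference, a minimal sketch of how that option could be changed at runtime, assuming the same volume name as above; whether enabling client quorum is the right move here is not something this thread establishes:

    # Client-side quorum set to auto is commonly recommended for replica 3 volumes
    gluster volume set myvolume-private cluster.quorum-type auto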
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
Hello, I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do? For now in order to get rid of these 3 unsynched files shall I do the same method that was suggested to me in this thread? Thanks, Mabi ------- Original Message ------- On May 17, 2018 11:07 PM, mabi <mabi at protonmail.ch> wrote: > > > Hi Ravi,
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote: > Dear all, > > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. > > It looks like this bug is not resolved as I just now got 3 unsynched files on my arbiter node
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
On 05/23/2018 12:47 PM, mabi wrote: > Hello, > > I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do? > > For now in order to get rid of these 3 unsynched files shall I do the same method that was suggested to me in this thread? Sorry Mabi, I haven't had a chance to dig deeper into this. The workaround of
2011 Jun 28
2
Issue with Gluster Quota
An HTML attachment was scrubbed... URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110628/64de4f5c/attachment.html>
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all, I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. It looks like this bug is not resolved as I just now got 3 unsynched files on my arbiter node like I used to before upgrading. This problem started
2004 Oct 03
1
"#" sending
...do it with cisco ata 186 connected phones or X-lite softphones... I tried with login agents, zapbarge, authenticate, etc... Thank you in advance... -- ------------------------------------------------------------------------ Javier Rubio Xinet Solutions Av. Paseo de las Americas 2403-G, Col. Contry La Silla CP. 65137 Tel. 5x8 Directo (81) 81.03.00.92 - Tel. 7x24 (81) 81.28.42.57 jarubio@xinet.com.mx www.xinet.com.mx -------------- next part -------------- Skipped content of type multipart/related
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38
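The per-node output quoted in this thread is typically gathered with commands like the following, run against the file on each brick rather than on the FUSE mount; the long directory chain is replaced by a placeholder here:

    BRICK_FILE="/data/myvol-private/brick/PATH/TO/OC_DEFAULT_MODULE/problematicfile"   # placeholder path
    # Basic file metadata (size, blocks, inode, link count)
    stat "$BRICK_FILE"
    # All extended attributes in hex, including trusted.gfid and the trusted.afr.* changelog xattrs
    getfattr -d -m . -e hex "$BRICK_FILE"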
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer. Stupid question but how do I delete the trusted.afr xattrs on this brick? And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? ------- Original Message ------- On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote: > > On 04/09/2018 04:36 PM, mabi wrote: > >
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote: > As suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
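Filling out the truncated command above, the fix discussed in this thread amounts to something like the following on the arbiter (node 3) brick; the exact xattr names should be taken from the earlier getfattr output, and the second xattr is an assumption:

    # BRICK_FILE is the brick-side path placeholder used in the sketch further up
    # Remove the pending afr xattrs the arbiter holds against the data bricks
    setfattr -x trusted.afr.myvol-private-client-0 "$BRICK_FILE"
    setfattr -x trusted.afr.myvol-private-client-1 "$BRICK_FILE"   # only if this xattr is actually present
    # Then trigger self-heal so the entry is reconciled
    gluster volume heal myvol-private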
2017 Oct 26
0
not healing one file
Hi Richard, Thanks for the information. As you said there is a gfid mismatch for the file. On brick-1 & brick-2 the gfids are the same & on brick-3 the gfid is different. This is not considered a split-brain because we have two good copies here. Gluster 3.10 does not have a method to resolve this situation other than manual intervention [1]. Basically what you need to do is remove the
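The excerpt is cut off, but the manual procedure referenced here for a gfid mismatch usually comes down to removing the bad copy and its gfid hard link from the offending brick; the paths, gfid and volume name below are placeholders:

    # On brick-3, which holds the copy with the wrong gfid:
    BAD_FILE="/path/to/brick3/some/dir/file"                            # placeholder
    GFID_LINK="/path/to/brick3/.glusterfs/ab/cd/abcd0000-full-gfid"     # placeholder, built from the bad gfid
    rm "$BAD_FILE" "$GFID_LINK"
    # Then let self-heal recreate the file from the two good copies
    gluster volume heal VOLNAME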
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again thanks, that worked and I now have no more unsynched files. You mentioned that this bug has been fixed in 3.13, would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. ------- Original Message ------- On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote: >
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi, I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured: features.quota-deem-statfs:
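Alongside the volume layout above, these are the standard commands for checking what the gluster CLI itself reports about pending heals and split-brain entries on this volume:

    gluster volume heal virt_images info
    gluster volume heal virt_images info split-brain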
2018 Apr 23
4
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi, after 2 years running GlusterFS without major problems we're facing some strange errors lately. After updating to 3.12.7 some users reported at least 4 broken directories with some invisible files. The files are at the bricks and don't start with a dot, but aren't visible in "ls". Clients can still interact with them by using the explicit path. More information:
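A small sketch, assuming a FUSE mount, of how the "invisible but reachable by explicit path" symptom described above can be demonstrated; both paths and the file name are placeholders:

    # Directory listing through the client does not show the file...
    ls -la /mnt/glustervol/problem-dir                      # placeholder mount path
    # ...but it exists on the brick
    ls -la /path/to/brick/problem-dir                       # placeholder brick path
    # ...and can still be reached from the client via its explicit path
    stat /mnt/glustervol/problem-dir/invisible-file         # placeholder file name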
2004 Oct 05
2
Re: RES: Working E1 MFC/R2 México !!! (Steve Underwood)
...e really the same as the > examples, the only thing I did was change the variant to mx. However the > most important thing is for you to get the last file that Steve > Underwood has on his ftp site. As I said before the only thing you have > to change in the config is the variant for your contry, that's br. > > Keep us posted on your progress so we can know that R2 is working in > other countries! > > On 05/10/2004, at 6:45 AM, Miguel wrote: > >> Miguel, >> >> I'm in Brazil and I'll try it with Embratel (a telmex company), may >> you send >...
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html It should be pretty much the same for replica 3; you change the xattrs with something like: # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a When I try to decide which
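As a complement to the xattr method quoted above, releases in this range also expose split-brain resolution directly in the CLI, which avoids hand-editing changelog xattrs; the file path below is a placeholder and the brick name is taken from the volume info earlier in these results:

    # Declare one brick the source copy for a specific file...
    gluster volume heal virt_images split-brain source-brick virt2:/data/virt_images/brick /path/inside/volume/file
    # ...or simply keep the bigger of the two copies
    gluster volume heal virt_images split-brain bigger-file /path/inside/volume/file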