similar to: [GLUSTERFS] Peer Rejected

Displaying 20 results from an estimated 20000 matches similar to: "[GLUSTERFS] Peer Rejected"

2017 Aug 02
0
[GLUSTERFS] auth.allow fail
Hi, since I upgraded to version 10.0.4 I have a problem with the auth.allow option. If I allow an IP address to access a volume, other clients can still mount the volume for no apparent reason... [gluster v set vol_test auth.allow IP_ADDRESS1] => IP_ADDRESS2 can easily mount vol_test ... I am currently working on a dispersed 4 + 2 volume [gluster v info vol_test] auth.allow :
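For context, a minimal sketch of the usual auth.allow workflow; the volume name matches the post but the addresses are placeholders, not taken from the thread:

    # set a comma-separated allow list, then verify the active value
    gluster volume set vol_test auth.allow 192.168.1.10,192.168.1.11
    gluster volume get vol_test auth.allow
    # reset to the default (allow all)
    gluster volume reset vol_test auth.allow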
2017 Jun 29
1
AUTH-ALLOW / AUTH-REJECT
Hi, I want to manage access on a dispersed volume. When I use gluster volume set test_volume auth.allow IP_ADDRESS it works, but with a HOSTNAME the filter doesn't apply... Any idea how to solve my problem? glusterfs --version 3.7 Have a nice day, Alex
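A sketch of the intended hostname-based filtering; whether 3.7 actually honours hostnames is exactly what this thread questions, and the example assumes the bricks can resolve the (hypothetical) names:

    gluster volume set test_volume auth.allow client1.example.com   # allow by hostname
    gluster volume set test_volume auth.reject badhost.example.com  # explicit deny by hostname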
2017 Sep 28
0
Upgrading (online) GlusterFS-3.7.11 to 3.10 with Distributed-Disperse volume
I'm working on upgrading a set of our gluster machines from 3.7 to 3.10. At first I was going to follow the guide here: https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/ but it mentions: > * Online upgrade is only possible with replicated and distributed > replicate volumes > * Online upgrade is not supported for dispersed or distributed >
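Since online upgrade is ruled out for dispersed volumes, the offline path the guide implies looks roughly like the sketch below; the volume name is hypothetical and the package-manager command is a placeholder for the distro's equivalent:

    # stop the volume once, then upgrade every server in turn
    gluster volume stop myvol
    systemctl stop glusterd            # on each server
    yum update glusterfs-server        # or the distro equivalent, on each server
    systemctl start glusterd           # on each server
    gluster volume start myvol         # once all servers are upgraded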
2017 Jul 27
0
AUTH-ALLOW / AUTH-REJECT
Can you describe the setup and the tests you did? Is it consistently reproducible? Also, you can file a bug at https://bugzilla.redhat.com/ On 06/29/2017 02:46 PM, Alexandre Blanca wrote: > Hi, > > I want to manage access on a dispersed volume. > When I use gluster volume set test_volume auth.allow IP_ADDRESS > it works, but with HOSTNAME the filter doesn't
2017 Dec 28
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, has anyone had the same experience? Could you provide me some information about this error? It happens only on the GlusterFS file system. Thank you, Mauro > On 20 Dec 2017, at 16:57, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Dear Users, > > I'm experiencing a random problem (a "file changed as we read it" error) during tar file
2017 Dec 20
2
"file changed as we read it" message during tar file creation on GlusterFS
Dear Users, I'm experiencing a random problem (a "file changed as we read it" error) during tar file creation on a distributed-dispersed Gluster file system. The tar files seem to be created correctly, but I can see a lot of messages similar to the following ones: tar: ./year1990/lffd1990050706p.nc.gz: file changed as we read it tar: ./year1990/lffd1990052106p.nc.gz: file changed as we read
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya, thank you very much for your support and sorry for the late reply. Below you can find the output of the "gluster volume info tier2" command and the gluster software stack version: gluster volume info Volume Name: tier2 Type: Distributed-Disperse Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c Status: Started Snapshot Count: 0 Number of Bricks: 6 x (4 + 2) = 36 Transport-type: tcp Bricks:
2019 Jun 10
2
Expected behavior of lld during LTO for global symbols (Attr Internal/Common)
Hi, I have an issue during the LTO phase of the LLVM compiler, which is as follows: File t3.c --------- #include <stdio.h> #include <stdlib.h> // A linked list node struct Node { int data; struct Node* next; struct Node* prev; }; struct Node* head; /* Given a reference (pointer to pointer) to the head of a list and an int, inserts a new node on the front of the list. */
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to minor differences in the file timestamps on the backend bricks of the same subvolume (for a given file); during the course of the tar, the timestamp can be served from different bricks, causing tar to complain. The ctime xlator [1] feature, once ready, should fix this issue by storing timestamps as xattrs on the bricks, i.e. all bricks
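One way to observe the mismatch described above is to compare the backend mtime of the same file on each brick; the hostnames and brick path below are hypothetical, with the file name taken from the earlier post:

    # print name, epoch mtime, and human-readable mtime per brick
    for host in gs01 gs02 gs03; do
      ssh "$host" stat -c '%n %Y %y' /bricks/brick1/year1990/lffd1990050706p.nc.gz
    done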
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi, thank you very much for your support and explanation. If I understand correctly, the ctime xlator feature is not present in the current gluster package but it will be in a future release, right? Thank you again, Mauro > On 02 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote: > > I think it is safe to ignore it. The problem exists due to the
2017 Aug 06
0
State: Peer Rejected (Connected)
On 2017-08-06 15:59, mabi wrote: > Hi, > > I have a 3-node replica (including arbiter) volume with GlusterFS > 3.8.11, and last night one of my nodes (node1) ran out of memory for > some unknown reason, and so the Linux OOM killer killed the > glusterd and glusterfs processes. I restarted the glusterd process but > now that node is in "Peer Rejected"
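For reference, the commonly documented recovery for a node stuck in "Peer Rejected" is sketched below; it assumes a default /var/lib/glusterd layout and is not verified against this exact case, so back that directory up first:

    # on the rejected node (node1)
    systemctl stop glusterd
    cd /var/lib/glusterd
    # keep glusterd.info (the node UUID), remove everything else
    find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    gluster peer probe node2    # re-probe a healthy peer, then restart glusterd once more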
2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro, What version of Gluster are you running and what is your volume configuration? IIRC, this was seen because of mismatches in the ctime returned to the client. I don't think there were issues with the files but I will leave it to Ravi and Raghavendra to comment. Regards, Nithya On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Hi All,
2019 Jun 11
3
Expected behavior of lld during LTO for global symbols (Attr Internal/Common)
Looks like this is indeed related to r360841. In C, there are distinctions between declarations, definitions and tentative definitions. Global variables declared with "extern" are declarations. Global variables that don't have "extern" and have initializers are definitions. If global variables have neither "extern" nor initializers, they are called tentative
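The three cases can be seen directly with nm on a toy file; -fcommon is passed because newer compilers default to -fno-common, which turns tentative definitions into plain BSS definitions instead of common symbols:

    cat > tentative.c <<'EOF'
    extern int x_decl;                 /* declaration: no storage here     */
    int x_def = 1;                     /* definition: initialized global   */
    int x_tent;                        /* tentative definition             */
    int use(void) { return x_decl; }   /* reference x_decl so nm emits it  */
    EOF
    cc -fcommon -c tentative.c
    nm tentative.o    # expect roughly: U x_decl, D x_def, C x_tent (common)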
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, any news about this issue? Can I ignore this kind of error message, or do I have to do something to correct it? Thank you in advance and sorry for my insistence. Regards, Mauro > On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Nithya, > > thank you very much for your support and sorry for the late reply. > Below
2017 Aug 06
1
State: Peer Rejected (Connected)
Hi Ji-Hyeon, thanks to your help I could find the problematic file. It turned out to be the quota file of my volume: it has a different checksum on node1, whereas node2 and the arbiter node have the same checksum. This is expected, as I had issues with my quota file and had to fix it manually with a script (more details in a previous post on this mailing list), and I only did that on node1. So what I now
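A hedged sketch of the checksum comparison being described; the volume name is a placeholder, and the node names follow the thread. Compare the outputs before copying anything between nodes:

    # run from any host that can reach all three nodes
    for host in node1 node2 arbiternode; do
      ssh "$host" cksum /var/lib/glusterd/vols/myvol/quota.conf
    done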
2019 Jun 20
2
Expected behavior of lld during LTO for global symbols (Attr Internal/Common)
Hi Teresa, can you please let me know if there is any update on this issue? Thanks M Suresh From: Teresa Johnson <tejohnson at google.com> Sent: Tuesday, June 11, 2019 7:23 PM To: Rui Ueyama <ruiu at google.com> Cc: Mani, Suresh <Suresh.Mani at amd.com>; llvm-dev <llvm-dev at lists.llvm.org> Subject: Re: [llvm-dev] Expected behavior of lld during LTO for global symbols
2012 Aug 24
1
Peer Rejected (Connected) how to resolve
Dear experts, I'm using glusterfs 3.2.5 and I have a cluster of 6 peers. Now one of the peers says all the other 5 peers are in Peer Rejected (Connected) status, and the other 5 peers say that peer is in Peer Rejected (Connected) status. And I noticed that if I create a new volume in the cluster, the faulty peer won't see the volume. Anyone know how to recover the faulty peer? Thank you very
2019 Oct 29
0
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
On 2019-10-28 4:10 p.m., Jason Gunthorpe wrote: > From: Jason Gunthorpe <jgg at mellanox.com> > > Remove the interval tree in the driver and rely on the tree maintained by > the mmu_notifier for delivering mmu_notifier invalidation callbacks. > > For some reason amdgpu has a very complicated arrangement where it tries > to prevent duplicate entries in the interval_tree,
2019 Jun 21
2
Expected behavior of lld during LTO for global symbols (Attr Internal/Common)
Thanks for the info Teresa, Regards M Suresh From: Teresa Johnson <tejohnson at google.com> Sent: Thursday, June 20, 2019 7:15 PM To: Mani, Suresh <Suresh.Mani at amd.com> Cc: Rui Ueyama <ruiu at google.com>; llvm-dev <llvm-dev at lists.llvm.org> Subject: Re: [llvm-dev] Expected behavior of lld during LTO for global symbols (Attr Internal/Common)
2017 Aug 02
1
When can I start using a peer that was added to a large volume?
I added a peer to a 50GB replica volume and the initial replication seems to go rather slow. It's about 50GB but has a lot of small files and a lot of files in the same folder. What would happen if I try to access a file on the new peer? Will it just fail? Will gluster fetch it seamlessly from the replication partner? Or will the file just not be there? Thanks, -- Kind regards, Tom