Displaying 20 results from an estimated 10000 matches similar to: "[GLUSTERFS] auth.allow fail"
2017 Jun 29
1
AUTH-ALLOW / AUTH-REJECT
Hi,
I want to manage access on a dispersed volume. When I use gluster volume set test_volume auth.allow IP_ADDRESS it works, but with a HOSTNAME the filter doesn't apply...
Any idea how to solve this?
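For reference, a minimal reproduction would look something like this (the volume name is from the message above; the IP and hostname values are illustrative):

    # restricting access by IP address works
    gluster volume set test_volume auth.allow 192.168.0.10
    # the same option set to a hostname does not seem to take effect
    gluster volume set test_volume auth.allow client1.example.com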
glusterfs --version
3.7
Have a nice day,
Alex
2017 Aug 02
0
[GLUSTERFS] Peer Rejected
Hi, I am currently working on version 10.0.4.
I would like to add a node to my current trusted pool (composed of 6 nodes in a dispersed 4+2 volume).
When I perform [gluster peer probe NEW_NODE] the probe is rejected.
I have followed the recommendations of the documentation, "Resolving Peer Rejected State", but the problem is still there...
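For reference, that procedure is roughly the following sketch (the state directory is the usual /var/lib/glusterd; back it up before trying this):

    # on the rejected node
    systemctl stop glusterd
    # remove all glusterd state except glusterd.info (the node's own identity)
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterd
    # re-probe any healthy member of the pool, then restart glusterd
    gluster peer probe GOOD_NODE
    systemctl restart glusterd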
Can someone help me?
Regards,
A.
2017 Jul 27
0
AUTH-ALLOW / AUTH-REJECT
Can you describe the setup and the tests you did? Is it consistently
reproducible?
You can also file a bug at https://bugzilla.redhat.com/
On 06/29/2017 02:46 PM, Alexandre Blanca wrote:
> Hi,
>
> I want to manage access on a dispersed volume.
> When I use gluster volume set test_volume auth.allow IP_ADDRESS
> it works but with HOSTNAME the filter doesn't
2017 Sep 28
0
Upgrading (online) GlusterFS-3.7.11 to 3.10 with Distributed-Disperse volume
I'm working on upgrading a set of our Gluster machines from 3.7 to 3.10;
at first I was going to follow the guide here:
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/
but it mentions:
> * Online upgrade is only possible with replicated and distributed
> replicate volumes
> * Online upgrade is not supported for dispersed or distributed
>
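For dispersed volumes the guide therefore implies an offline upgrade; a rough sketch of what that involves (the package commands assume an RPM-based system and are illustrative, so verify against the guide):

    gluster volume stop VOLNAME          # for every volume
    systemctl stop glusterd              # on every server
    yum update glusterfs glusterfs-server
    systemctl start glusterd             # on every server
    gluster volume start VOLNAME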
2017 Dec 28
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi All,
Has anyone had the same experience?
Could you provide me with some information about this error?
It happens only on GlusterFS file system.
Thank you,
Mauro
> On 20 Dec 2017, at 16:57, Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>
>
> Dear Users,
>
> I'm experiencing a random problem (a "file changed as we read it" error) during tar file
2017 Dec 20
2
"file changed as we read it" message during tar file creation on GlusterFS
Dear Users,
I'm experiencing a random problem (a "file changed as we read it" error) during tar file creation on a distributed dispersed Gluster file system.
The tar files seem to be created correctly, but I see a lot of messages similar to the following ones:
tar: ./year1990/lffd1990050706p.nc.gz: file changed as we read it
tar: ./year1990/lffd1990052106p.nc.gz: file changed as we read
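If the archives themselves verify as intact, GNU tar (1.23 and later) can suppress this particular warning; a hedged example with an illustrative archive name (note that tar may still exit non-zero when a file changes under it):

    tar --warning=no-file-changed -czf year1990.tar.gz ./year1990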
2017 Sep 20
2
hostname
Hi,
how do I change the hostname of Gluster servers? If I modify hostname1 in /etc/lib/glusterd/peers/uuid, the change is not saved...
gluster pool list returns the server's IP and not the new hostname...
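For what it's worth, peer state normally lives under /var/lib/glusterd/peers/ and glusterd keeps it in memory, so edits made while the daemon is running get overwritten. A heavily hedged sketch of the approach commonly reported (back up /var/lib/glusterd first; the uuid file name is illustrative):

    # stop glusterd on ALL nodes first
    systemctl stop glusterd
    # on every OTHER node, update the stored name of the renamed peer
    sed -i 's/^hostname1=OLD_NAME$/hostname1=NEW_NAME/' /var/lib/glusterd/peers/<uuid>
    # then start glusterd everywhere again
    systemctl start glusterd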
Thank you
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya,
thank you very much for your support and sorry for the late reply.
Below you can find the output of the "gluster volume info tier2" command and the Gluster software stack version:
gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
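(For readers: "Number of Bricks: 6 x (4 + 2) = 36" means six disperse subvolumes, each holding 4 data bricks plus 2 redundancy bricks, so each subvolume tolerates the loss of up to 2 of its 6 bricks.)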
2018 Apr 03
1
Tune and optimize dispersed cluster
Hi all,
I have set up a dispersed cluster (2+1), version 3.12.
Given the way our users run jobs, I guessed that we would pay the
penalties of a dispersed cluster, and I was right....
A calculation that usually takes about 48 hours (on a replicated cluster)
now took about 60 hours.
There are a lot of "small" reads/writes going on in these programs.
Is there a way to tune or optimize a dispersed cluster
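A few commonly suggested starting points for small-file workloads (these option names exist in 3.12, but the values are illustrative and should be benchmarked rather than taken as a recommendation):

    gluster volume set VOLNAME disperse.eager-lock off
    gluster volume set VOLNAME performance.client-io-threads on
    gluster volume set VOLNAME performance.write-behind-window-size 4MB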
2010 Dec 15
0
Problems with the borders (High difficulty)
Dear r-help members,
Could any of you help me with this model, please?
This model gives an error when some value touches any border, and I do not know how to correct it. 80% of the seeds produced by a plant will fall into the parent cell, 15% into the first ring according to the king's movement (in chess), and 5% into the second ring defined by the queen2 matrix. Someone told me the functions
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to minor
differences in file timestamps across the backend bricks of the same
subvolume (for a given file); during the course of the tar, the timestamp
can be served from different bricks, causing tar to complain. The ctime
xlator [1] feature, once ready, should fix this issue by storing
timestamps as xattrs on the bricks, i.e. all bricks
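One way to observe the mismatch directly (the brick paths and file name are illustrative; run this on the servers against the brick backends, not the client mount):

    stat -c '%n mtime=%Y ctime=%Z' /bricks/brick*/year1990/lffd1990050706p.nc.gz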
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi,
thank you very much for your support and explanation.
If I understand correctly, the ctime xlator feature is not present in the current gluster package but it will be in a future release, right?
Thank you again,
Mauro
> On 2 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote:
>
> I think it is safe to ignore it. The problem exists due to the
2018 Apr 03
1
Dispersed cluster tune, optimize
Hi all,
I have set up a dispersed cluster (2+1), version 3.12.
I guessed that we were going to get punished by small reads/writes...
and I was right.
A calculation that usually takes 48 hours
took about 60 hours, and there are many small reads/writes
to intermediate files that get summed up at the end.
Is there a way to tune or optimize a dispersed cluster to work
better with small reads/writes?
Many
2018 Jan 12
1
Reading over than the file size on dispersed volume
Hi All,
I'm using gluster as a dispersed volume and I'm writing to ask about a very
serious issue.
I have 3 servers and there are 9 bricks.
My volume configuration is as below.
------------------------------------------------------
Volume Name: TEST_VOL
Type: Disperse
Volume ID: be52b68d-ae83-46e3-9527-0e536b867bcc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (6 + 3) = 9
Transport-type: tcp
Bricks:
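(One possible explanation, offered as an assumption rather than a diagnosis: a disperse volume fragments data in fixed stripes of 512 bytes per data brick, so a 6+3 volume does I/O in multiples of 512 x 6 = 3072 bytes, and reads can be rounded up past the logical file size.)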
2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro,
What version of Gluster are you running and what is your volume
configuration?
IIRC, this was seen because of mismatches in the ctime returned to the
client. I don't think there were issues with the files but I will leave it
to Ravi and Raghavendra to comment.
Regards,
Nithya
On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>
> Hi All,
2004 Feb 02
1
glm.poisson.disp versus glm.nb
Dear list,
This is a question about overdispersion and the ML estimates of the
parameters returned by the glm.poisson.disp (L. Scrucca) and glm.nb
(Venables and Ripley) functions. Both appear to assume a negative binomial
distribution for the response variable.
Paul and Banerjee (1998) developed C(alpha) tests for "interaction and main
effects, in an unbalanced two-way layout of counts
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
If you add bricks to the existing volume, one host could be down in each
three-host group. If you recreate the volume with one brick on each
host, then two random hosts can be tolerated.
Assume s1, s2, s3 are the current servers and you add s4, s5, s6 and extend
the volume. If any two servers in one group go down, you lose data. If
you choose two random hosts, the probability that you lose data will be 20%
in this
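For anyone checking the arithmetic under one possible layout (two 3-host groups, each holding a complete 4+2 subvolume with two bricks per host; the layout is an assumption):

    pairs of hosts that can fail together: C(6,2) = 15
    pairs falling inside a single group:   2 x C(3,2) = 6
    probability of data loss:              6/15 = 40%

so the exact figure depends on how the bricks are actually placed.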
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Hi Mauro Tridici,
From the information provided it appears that you have placed 2 bricks of a
subvolume on one host. Please confirm.
The number of hosts that could go down without losing access to data can be
derived based on the brick configuration/distribution. Please let us know
the brick distribution plan.
Regards,
Sunil kumar Acharya
Senior Software Engineer
Red Hat
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All,
any news about this issue?
Can I ignore this kind of error message, or do I have to do something to correct it?
Thank you in advance and sorry for my insistence.
Regards,
Mauro
> On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>
>
> Hi Nithya,
>
> thank you very much for your support and sorry for the late reply.
> Below
2018 Mar 02
0
geo-replication
Hi Kotresh,
I am expecting my hardware to show up next week.
My plan is to run Gluster version 3.12 on CentOS 7.
Has the issue been fixed in version 3.12?
Thanks a lot for your help!
/Marcus
On Fri, Mar 02, 2018 at 05:12:13PM +0530, Kotresh Hiremath Ravishankar wrote:
> Hi Marcus,
>
> There are no issues with geo-rep and disperse volumes. It works with
> disperse volume
> being