search for: obby

Displaying 5 results from an estimated 5 matches for "obby".

2008 Jun 04
1
permsn incorrect when x==m (library: prob) (PR#11571)
...
     [,1] [,2]
[1,]    1    2
[2,]    2    1
#Workaround: permsn(3,2) i.e. permsn(x+1,m)
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    1    2    1    3    2    3
[2,]    2    1    3    1    3    2
#but then we have to remove the results with x==3 in them
#hope that all makes sense
Thank you, Obby
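As a minimal sketch of the workaround described in that report, assuming permsn() from the prob package returns one permutation per column (as in the output quoted above); the variable names here are illustrative only:

    library(prob)
    x <- 2; m <- 2
    p <- permsn(x + 1, m)            # ask for one extra element to dodge the x == m case
    keep <- colSums(p == x + 1) == 0 # keep only columns that do not contain the extra element x+1
    p[, keep, drop = FALSE]          # the permutations of 1:x taken m at a time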
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
On 05/15/2018 08:08 AM, Davide Obbi wrote: > Thanks Kaleb, > > any chance I can make the node work after the downgrade? > thanks Without knowing what doesn't work, I'll go out on a limb and guess that it's an op-version problem. Shut down your 3.13 nodes, change their op-version to one of the valid 3.12 op-versions (e.g. 31203) and restart. Then the 3.12 nodes should
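A hedged sketch of one way that advice is commonly carried out, assuming the usual glusterd state layout in which /var/lib/glusterd/glusterd.info carries an operating-version line; treat the path, field name, and sed edit as assumptions to verify on your own nodes before running anything:

    # on each downgraded node, with glusterd shut down
    systemctl stop glusterd
    # set the op-version to a valid 3.12 value, e.g. 31203 (value taken from the advice above)
    sed -i 's/^operating-version=.*/operating-version=31203/' /var/lib/glusterd/glusterd.info
    systemctl start glusterd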
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Thanks Kaleb, any chance I can make the node work after the downgrade? thanks On Tue, May 15, 2018 at 2:02 PM, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote: > > You can still get them from > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ > > (I don't know how much longer they'll be there. I suggest you copy them > if you think
2018 May 15
0
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
You can still get them from https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ (I don't know how much longer they'll be there. I suggest you copy them if you think you're going to need them in the future.) On 05/15/2018 04:58 AM, Davide Obbi wrote: > hi, > > I noticed that this repo for glusterfs 3.13 does not exist anymore at: > >
2018 May 15
2
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
hi, I noticed that this repo for glusterfs 3.13 does not exist anymore at: http://mirror.centos.org/centos/7/storage/x86_64/ I knew it was not going to be long-term supported, however the downgrade to 3.12 breaks the server node. I believe the issue is with: [2018-05-15 08:54:39.981101] E [MSGID: 101019] [xlator.c:503:xlator_init] 0-management: Initialization of volume 'management'