Displaying 20 results from an estimated 30 matches for "fattening".
2018 May 30
2
shard corruption bug
What shard corruption bug? bugzilla url? I'm running into some odd behavior
in my lab with shards and RHEV/KVM data, trying to figure out if it's
related.
Thanks.
On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it
> to settle. No problems. I am now running replica 4
2009 Jan 23
1
Bezerk they will go!
On Thu, Jan 22, 2009 at 4:41 PM, Ralph Angenendt
<ra+centos at br-online.de> wrote:
> Michael St. Laurent wrote:
> > > > Is there a projected release date for CentOS-5.3?
> > > >
> > > >
> > > he....did....not....ask...this....
> > >
> > > search the forums..when it's done..unless you
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left
it to settle. No problems. I am now running replica 4 (preparing to
remove a brick and host to replica 3).
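The add-then-shrink sequence described above can be sketched as follows (the volume name, hostnames, and brick paths are hypothetical, and the commands assume a running Gluster cluster):

```shell
# Grow to replica 4 by attaching the new host's brick
gluster volume add-brick vol0 replica 4 newhost:/bricks/vol0

# Let self-heal settle; this should eventually report no unhealed entries
gluster volume heal vol0 info

# Shrink back to replica 3 by removing one brick, then drop its host
gluster volume remove-brick vol0 replica 3 oldhost:/bricks/vol0 force
gluster peer detach oldhost
```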
On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote:
> On Fri, May 4, 2018 at 14:06 Jim Kinney
> <jim.kinney at gmail.com>
> wrote:
> > It stopped being an outstanding
2018 Mar 07
0
Kernel NFS on GlusterFS
Gluster does the sync part better than corosync. It's not an
active/passive failover system; it's more all-active. Gluster handles the
recovery once all nodes are back online.
That requires the client tool chain to understand that a write goes to
all storage devices, not just the active one.
3.10 is a long term support release. Upgrading to 3.12 or 4 is not a
significant issue once a replacement
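The client-side piece referenced above is the native FUSE mount, where the client itself sends each write to every replica brick (server and volume names here are hypothetical):

```shell
# Native FUSE mount: the client writes to all replicas directly
mount -t glusterfs server1:/vol0 /mnt/vol0

# Extra volfile servers only cover the initial volfile fetch;
# the data path stays all-active across the replicas
mount -t glusterfs -o backup-volfile-servers=server2:server3 \
      server1:/vol0 /mnt/vol0
```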
2018 May 04
2
shard corruption bug
On Fri, May 4, 2018 at 14:06 Jim Kinney <jim.kinney at gmail.com>
wrote:
> It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
So, it is not possible to extend and rebalance a working cluster with
sharded data?
Can someone confirm this? Maybe the ones that hit the bug in the past
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/
The major issue in 3.12.6 is not present in 3.12.7. Bugzilla ID listed in link.
On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote:
>What shard corruption bug? bugzilla url? I'm running into some odd
>behavior
>in my lab with shards and RHEV/KVM data, trying to figure out if it's
>related.
>
2018 Mar 07
4
Kernel NFS on GlusterFS
Hello,
I'm designing a 2-node, HA NAS that must support NFS. I had planned on
using GlusterFS native NFS until I saw that it is being deprecated. Then, I
was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA
support ended after 3.10 and its replacement is still a WIP. So, I landed
on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite
well. Are
2018 May 30
1
peer detach fails
All,
I added a third peer for an arbiter brick host to a replica 2 cluster.
Then I realized I can't use it since it has no infiniband like the
other two hosts (infiniband and ethernet for clients). So I removed the
new arbiter bricks from all of the volumes. However, I can't detach the
peer as it keeps saying there are bricks it hosts. Nothing in volume
status or info shows that host to be
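A hedged sketch of the cleanup being attempted (hostnames, volume name, and brick path are assumptions; removing an arbiter brick takes the volume from replica 3 arbiter 1 back to plain replica 2):

```shell
# See whether any volume still lists a brick on the arbiter host
gluster volume info | grep -B5 arbiterhost

# Remove the arbiter brick, dropping back to plain replica 2
gluster volume remove-brick vol0 replica 2 \
    arbiterhost:/bricks/vol0/arbiter force

# Detach should succeed only once no volume references the peer
gluster peer detach arbiterhost
```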
2003 Oct 02
1
pam_krb5 errors on OpenSSH3.6.1p2
...hu Oct 2 13:13:17 2003 ...
k9 sshd[25855]: pam_krb5: authenticate error: Preauthentication failed
(-1765328360)
This occurs whether I am using a Kerberos ticket to get in or simply
trying local password.
It seems to be a spurious (?) message, as I am always authenticated each
time, but it sure is fattening up my logs.
Anybody else see this problem, perchance?
--
*******************************************************
Quellyn L. Snead
UNIX Effort Team ( unixeffort at lanl.gov )
CCN-2 Enterprise Software Management Team
Los Alamos National Laboratory
(505) 667-4185 Schedule B
**********************...
2017 Sep 05
0
returning from a failed RAID
All,
I had a "bad timing" event where I lost 3 drives in a RAID6 array and
the structure of all of the LVM pools and nodes was lost.
This array was 1/2 of a redundant (replica 2) gluster config (will be
adding an additional 3rd node soon for split-brain/redundancy with failure
issues).
The failed drives were replaced, the array rebuilt, all the thin_pools
and thin_volumes recreated, LUKS
2018 Jan 08
0
different names for bricks
I just noticed that gluster volume info foo and gluster volume heal
foo statistics use different indices for brick numbers. Info is 1-based
but heal statistics is 0-based.
gluster volume info clifford
Volume Name: clifford
Type: Distributed-Replicate
Volume ID: 0e33ff98-53e8-40cf-bdb0-3e18406a945a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
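A minimal sketch of the off-by-one mapping (heal statistics counts bricks from 0 while volume info counts from 1):

```shell
# heal statistics labels bricks 0..N-1; volume info labels Brick1..BrickN
heal_index=0
info_index=$((heal_index + 1))
echo "heal brick ${heal_index} == info Brick${info_index}"
```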
2018 May 09
0
3.12, ganesha and storhaug
All,
I am upgrading the storage cluster from 3.8 to 3.10 or 3.12. I have
3.12 on the ovirt cluster. I would like to change the client connection
method to NFS/NFS-Ganesha as the FUSE method causes some issues with
heavy Python users (mmap errors on file open for write).
I see that nfs-ganesha was dropped after 3.10 yet there is an updated
version in the 3.12 repo for CentOS 7 (which I am
2018 May 09
0
Some more questions
On Wed, 2018-05-09 at 18:26 +0000, Gandalf Corvotempesta wrote:
> Ok, some more questions as I'm still planning our SDS (but I'm
> inclined to use LizardFS; gluster is too inflexible)
>
> Let's assume a replica 3:
>
> 1) currently, it is not possible to add a single server and rebalance
> like any other SDS (Ceph, Lizard, Moose, DRBD, ....), right? In replica
2018 May 09
2
Some more questions
On Wed, May 9, 2018 at 21:22 Jim Kinney <jim.kinney at gmail.com>
wrote:
> You can change the replica count. Add a fourth server, add its brick to
> the existing volume with gluster volume add-brick vol0 replica 4
> newhost:/path/to/brick
This doesn't add space, but only a new replica, increasing the number of
copies
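The distinction can be sketched with two hypothetical add-brick invocations (volume name and brick paths are assumptions):

```shell
# Raises the replica count: one more copy of every file, no extra space
gluster volume add-brick vol0 replica 4 host4:/bricks/vol0

# Adds a distribute subvolume instead: more usable space, replica count
# unchanged (bricks must be added in multiples of the replica count, here 3)
gluster volume add-brick vol0 \
    host4:/bricks/vol0 host5:/bricks/vol0 host6:/bricks/vol0
gluster volume rebalance vol0 start
```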
2018 May 09
0
Some more questions
Correct. A new server will NOT add space in this manner. But the
original Q was about rebalancing after adding a 4th server. If you are
using distributed-replicated, then yes, a new server will be adding a
portion of its space to add more space to the cluster.
But in a purely replicated mode, nope.
On Wed, 2018-05-09 at 19:25 +0000, Gandalf Corvotempesta wrote:
> On Wed, May 9, 2018
2018 May 09
2
Some more questions
On Wed, May 9, 2018 at 21:31 Jim Kinney <jim.kinney at gmail.com>
wrote:
> correct. a new server will NOT add space in this manner. But the original
> Q was about rebalancing after adding a 4th server. If you are using
> distributed-replicated, then yes, a new server will be adding a portion
> of its space to add more space to the cluster.
Wait, in a distribute-replicate,
2018 May 09
0
Some more questions
It all depends on how you are set up on the distribute. Think RAID 10
with 4 drives - each pair stripes (distribute) and the pair of pairs
replicates.
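The RAID 10 analogy maps onto a hypothetical 2 x 2 volume like this (hostnames and brick paths are assumptions):

```shell
# Bricks are paired in the order listed: (host1,host2) replicate,
# (host3,host4) replicate, and files are distributed across the two pairs
gluster volume create vol0 replica 2 \
    host1:/bricks/vol0 host2:/bricks/vol0 \
    host3:/bricks/vol0 host4:/bricks/vol0
gluster volume start vol0
```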
On Wed, 2018-05-09 at 19:34 +0000, Gandalf Corvotempesta wrote:
> On Wed, May 9, 2018 at 21:31 Jim Kinney <jim.kinney at gmail.com>
> wrote:
> > correct. a new server will NOT add space in this manner. But the
2018 Apr 02
0
Is the size of bricks limiting the size of files I can store?
On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote:
> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>
> > On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote:
> >
> > > Hi
> > >
> > > I've found something that works so weirdly I'm certain I have
> > > missed how
> > > gluster is supposed to be
2018 Jan 26
0
Replacing a third data node with an arbiter one
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote:
> On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N
> <ravishankar at redhat.com> wrote:
> >
> > On 01/24/2018 07:20 PM, Hoggins! wrote:
> >
> > Hello,
> >
> > The subject says it all. I have a replica 3 cluster :
> >
> > gluster> volume info thedude
> >
2017 Oct 05
2
Access from multiple hosts where users have different uid/gid
I have a setup with multiple hosts, each of which is administered
separately, so there is no unified uid/gid mapping for the users.
When mounting a GlusterFS volume, a file owned by user1 on host1 might
become owned by user2 on host2.
I was looking into POSIX ACL or bindfs, but that won't help me much.
What did other people do with this kind of problem?