similar to: Some more questions

Displaying 20 results from an estimated 2000 matches similar to: "Some more questions"

2018 May 09
0
Some more questions
On Wed, 2018-05-09 at 18:26 +0000, Gandalf Corvotempesta wrote: > Ok, some more questions as I'm still planning our SDS (but I'm prone to use LizardFS, gluster is too inflexible). > Let's assume a replica 3: > 1) currently, it is not possible to add a single server and rebalance like any other SDS (Ceph, Lizard, Moose, DRBD, ...), right? In replica
2018 May 09
2
Some more questions
On Wed 9 May 2018 at 21:22, Jim Kinney <jim.kinney at gmail.com> wrote: > You can change the replica count. Add a fourth server, add its brick to the existing volume with gluster volume add-brick vol0 replica 4 newhost:/path/to/brick This doesn't add space, but only a new replica, increasing the number of copies
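
As a rough sketch of the workflow described above (the volume name vol0 comes from the message; the peer name and brick path are placeholders), note that raising the replica count adds another copy of the data, not capacity:

    # Bring the new server into the trusted pool first
    gluster peer probe newhost
    # Add its brick while raising the replica count from 3 to 4;
    # this creates another full copy rather than adding usable space
    gluster volume add-brick vol0 replica 4 newhost:/path/to/brick
    # Let self-heal populate the new replica, then check progress
    gluster volume heal vol0 info
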
2018 May 04
2
shard corruption bug
On Fri 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> wrote: > It stopped being an outstanding issue at 3.12.7. I think it's now fixed. So, is it not possible to extend and rebalance a working cluster with sharded data? Can someone confirm this? Maybe the ones that hit the bug in the past
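
For reference, the extend-and-rebalance sequence in question would normally look like the sketch below (volume name and brick paths are placeholders); whether it is safe on a volume holding sharded data is exactly what is being asked here:

    # Grow a replica 3 volume by adding a complete new replica set
    gluster volume add-brick vol0 host4:/bricks/b1 host5:/bricks/b1 host6:/bricks/b1
    # Spread existing data onto the new bricks
    gluster volume rebalance vol0 start
    gluster volume rebalance vol0 status
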
2018 May 30
2
shard corruption bug
What shard corruption bug? Bugzilla URL? I'm running into some odd behavior in my lab with shards and RHEV/KVM data, trying to figure out if it's related. Thanks. On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote: > I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it to settle. No problems. I am now running replica 4
2017 Oct 10
4
ZFS with SSD ZIL vs XFS
Has anyone made a performance comparison between XFS and ZFS with ZIL on SSD, in a gluster environment? I've tried to compare both on another SDS (LizardFS) and I haven't seen any tangible performance improvement. Is gluster different?
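
For context, a minimal sketch of the ZFS side of such a comparison, assuming a pool named tank and an SSD at /dev/sdX (both placeholders); a separate log (SLOG) device only accelerates synchronous writes, which is one reason the benefit may not show up in every workload:

    # Attach the SSD as a separate ZFS intent log (SLOG) device
    zpool add tank log /dev/sdX
    # Confirm the log device is attached
    zpool status tank
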
2018 May 09
2
Some more questions
On Wed 9 May 2018 at 21:31, Jim Kinney <jim.kinney at gmail.com> wrote: > Correct. A new server will NOT add space in this manner. But the original Q was about rebalancing after adding a 4th server. If you are using distributed/replication, then yes, a new server will be adding a portion of its space to add more space to the cluster. Wait, in a distribute-replicate,
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it to settle. No problems. I am now running replica 4 (preparing to remove a brick and host to return to replica 3). On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote: > On Fri 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> wrote: > > It stopped being an outstanding
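
A hedged sketch of the brick removal being prepared here, with the volume name and brick path invented for illustration; on a replica volume the copy count is reduced with remove-brick and a lower replica value:

    # Drop from replica 4 back to replica 3 by removing the retiring host's brick
    gluster volume remove-brick vol0 replica 3 oldhost:/bricks/brick1 force
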
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/ The major issue in 3.12.6 is not present in 3.12.7. The Bugzilla ID is listed in the link. On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote: > What shard corruption bug? Bugzilla URL? I'm running into some odd behavior in my lab with shards and RHEV/KVM data, trying to figure out if it's related.
2018 Mar 07
4
Kernel NFS on GlusterFS
Hello, I'm designing a 2-node, HA NAS that must support NFS. I had planned on using GlusterFS native NFS until I saw that it is being deprecated. Then, I was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA support ended after 3.10 and its replacement is still a WIP. So, I landed on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite well. Are
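
A rough sketch of the pacemaker piece of such a setup, assuming pcs and the heartbeat resource agents are in use (the IP address, state directory, and resource names are placeholders); the gluster mount and the NFS exports themselves would be additional resources:

    # Floating service address that follows the active NFS node
    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s
    # Kernel NFS server resource, keeping its state directory on shared storage
    pcs resource create nfs_server ocf:heartbeat:nfsserver nfs_shared_infodir=/srv/nfs/info
    # Keep the address on the same node as the NFS server
    pcs constraint colocation add nfs_vip with nfs_server INFINITY
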
2018 May 30
1
peer detach fails
All, I added a third peer as an arbiter brick host to a replica 2 cluster. Then I realized I can't use it since it has no InfiniBand like the other two hosts (InfiniBand and Ethernet for clients). So I removed the new arbiter bricks from all of the volumes. However, I can't detach the peer as it keeps saying there are bricks it hosts. Nothing in volume status or info shows that host to be
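
A minimal sketch of how to confirm nothing still references the peer before detaching (the hostname is a placeholder); if the nodes disagree about which bricks exist, detach will keep refusing:

    # Confirm no volume still lists a brick on the host being removed
    gluster volume info | grep arbiter-host
    # Check pool state from more than one node; they should agree
    gluster peer status
    # Then detach the peer
    gluster peer detach arbiter-host
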
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > Has anyone made a performance comparison between XFS and ZFS with ZIL on SSD, in a gluster environment? > I've tried to compare both on another SDS (LizardFS) and I haven't seen any tangible performance improvement. > Is gluster different? Probably not. If there is, it would probably favor
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results using an SSD as LVM cache for gluster bricks (http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on bricks. On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote: > On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > > Has anyone made a performance comparison between XFS and ZFS with ZIL > > on
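
A condensed sketch of the lvmcache arrangement mentioned above, with the volume group, logical volume, and SSD device invented for illustration (the SSD is assumed to already be a PV in the volume group); the brick LV stays XFS and the SSD simply fronts it as a cache:

    # Create a cache pool on the SSD and attach it to the brick LV
    lvcreate --type cache-pool -L 100G -n brick_cache vg_bricks /dev/sdb
    lvconvert --type cache --cachepool vg_bricks/brick_cache vg_bricks/brick1
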
2018 Mar 07
0
Kernel NFS on GlusterFS
Gluster does the sync part better than corosync. It's not an active/passive failover system; it's more all-active. Gluster handles the recovery once all nodes are back online. That requires the client tool chain to understand that a write goes to all storage devices, not just the active one. 3.10 is a long-term support release. Upgrading to 3.12 or 4 is not a significant issue once a replacement
2018 May 09
0
Some more questions
Correct. A new server will NOT add space in this manner. But the original Q was about rebalancing after adding a 4th server. If you are using distributed/replication, then yes, a new server will be adding a portion of its space to add more space to the cluster. But in a purely replica mode, nope. On Wed, 2018-05-09 at 19:25 +0000, Gandalf Corvotempesta wrote: > On Wed 9 May 2018
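
One quick way to see which case applies, sketched with a placeholder volume name: a volume reported as Replicate gains no capacity from an extra server, while a Distributed-Replicate volume grows whenever bricks are added in multiples of the replica count:

    # "Type: Replicate" vs "Type: Distributed-Replicate" decides the answer
    gluster volume info vol0 | grep -E 'Type|Number of Bricks'
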
2018 Jan 26
2
Replacing a third data node with an arbiter one
On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote: > On 01/24/2018 07:20 PM, Hoggins! wrote: > > Hello, > > The subject says it all. I have a replica 3 cluster: > > gluster> volume info thedude > > Volume Name: thedude > > Type: Replicate > > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
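
A hedged sketch of the conversion being discussed, reusing the volume name from the thread but with host and brick paths invented for illustration; the usual approach is to drop to replica 2 and then add the arbiter brick back in a single step:

    # Remove the full third replica...
    gluster volume remove-brick thedude replica 2 node3:/bricks/thedude force
    # ...then add a small arbiter brick, converting to replica 3 arbiter 1
    gluster volume add-brick thedude replica 3 arbiter 1 arbiternode:/bricks/thedude-arbiter
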
2017 Aug 01
3
Corrupt index files
On Mon, Jul 24, 2017 at 07:56:23PM +0300, Aki Tuomi wrote: > Well, dovecot does not really guarantee access concurrency safety if you access indexes using more than one instance of dovecot at the same time. Pardon my ignorance, but how does Dovecot handle it when an IMAP client connects multiple times concurrently? Does it not launch multiple instances? > Nevertheless, did you try w/o
2017 Oct 05
2
Access from multiple hosts where users have different uid/gid
I have a setup with multiple hosts, each of which is administered separately, so there is no unified uid/gid mapping for the users. When mounting a GlusterFS volume, a file owned by user1 on host1 might become owned by user2 on host2. I was looking into POSIX ACLs or bindfs, but that won't help me much. What did other people do with this kind of problem?
2015 Sep 21
3
New software based on libvirt
Hello, I'd like to introduce the decentralized cloud Cherrypop. Combining libvirt and LizardFS (as of now), it becomes a cloud completely without masters. Thus, any single node is sufficient for the cloud to be up, so there are no wasted resources and no single point of failure. It's still pretty crude software but will work with some tinkering. Hope you try it and like it! For more
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create <volname> replica 3 arbiter 1 <list of bricks>`. It means (or at least is intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for glusterfs-4.0 that we should change this to `replica 2 arbiter
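
For reference, the existing form under discussion, filled in with placeholder host and brick names; the last brick listed becomes the arbiter:

    gluster volume create myvol replica 3 arbiter 1 \
        host1:/bricks/myvol host2:/bricks/myvol host3:/bricks/myvol-arbiter
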
2017 Jul 21
4
Corrupt index files
I am running Dovecot IMAP on Linux, on a LizardFS storage cluster with Maildir storage. This has worked well for most of the accounts for several months. However, in the last couple of weeks we have been seeing increasing errors regarding corrupted index files. Some of the affected accounts are unable to retrieve messages due to timeouts. It appeared the problems were due to the accounts being accessed