similar to: 3.8 Upgrade to 3.10

Displaying 20 results from an estimated 5000 matches similar to: "3.8 Upgrade to 3.10"

2017 Aug 25
0
3.8 Upgrade to 3.10
On 08/25/2017 09:17 AM, Lindsay Mathieson wrote: > Currently running 3.8.12, planning a rolling upgrade to 3.8.15 this > weekend. > > * debian 8 > * 3 nodes > * Replica 3 > * Sharded > * VM Hosting only > > The release notes strongly recommend upgrading to 3.10 > > * Is there any downside to staying on 3.8.15 for a while longer? 3.8 will
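For reference, the usual rolling-upgrade loop on a replica 3 volume runs one node at a time, roughly as below. This is a sketch, not the official procedure: the glusterfs-server service name is the Debian one, and <VOL> is a placeholder.

# systemctl stop glusterfs-server          (this node only)
# pkill glusterfs; pkill glusterfsd        (catch any lingering brick/client processes)
# apt-get update && apt-get install glusterfs-server
# systemctl start glusterfs-server
# gluster volume heal <VOL> info           (wait for 0 pending heals before moving to the next node)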
2017 Sep 21
2
Performance drop from 3.8 to 3.10
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial drop in read/write performance env: - 3 node, replica 3 cluster - Private dedicated Network: 1Gx3, bond: balance-alb - was able to down the volume for the upgrade and reboot each node - Usage: VM Hosting (qemu) - Sharded Volume - sequential read performance in VMs has dropped from 700Mbps to 300Mbps - Seq Write
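One quick way to reproduce that kind of sequential figure inside a guest is a direct-I/O dd run. A rough sketch only, with arbitrary file name and size, and no substitute for the real qemu workload:

# dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct   (sequential write)
# dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct              (sequential read)
# rm /tmp/ddtest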
2017 Sep 22
0
Performance drop from 3.8 to 3.10
Could you disable cluster.eager-lock and try again? -Krutika On Thu, Sep 21, 2017 at 6:31 PM, Lindsay Mathieson < lindsay.mathieson at gmail.com> wrote: > Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial > drop in read/write performance > > env: > > - 3 node, replica 3 cluster > > - Private dedicated Network: 1Gx3, bond: balance-alb >
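Spelled out, the suggested test is the following. A sketch, assuming a volume named <VOL>; worth re-enabling the option afterwards if it makes no difference:

# gluster volume set <VOL> cluster.eager-lock off
  ... rerun the read/write tests ...
# gluster volume set <VOL> cluster.eager-lock on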
2017 Sep 22
1
Performance drop from 3.8 to 3.10
On 22/09/2017 1:21 PM, Krutika Dhananjay wrote: > Could you disable cluster.eager-lock and try again? Thanks, but it didn't seem to make any difference. Can't test anymore at the moment as we're down a server that hung on reboot :( -- Lindsay Mathieson
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks! On 5 Jun 2017 at 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 > > -Krutika > > On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Great news. >> Is this planned to be published in next
2017 Jun 18
3
Debian 3.8.12 packages have been updated?
I installed 3.8.12 a while back and the packages seem to have been updated since (2017-06-13), prompting me for updates. I haven't seen any release announcements or notes on this though. -- Lindsay Mathieson
2017 May 17
3
Rebalance + VM corruption - current status and request for feedback
Hi, In the past couple of weeks, we've sent the following fixes concerning VM corruption upon doing rebalance - https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051 These fixes are very much part of the latest 3.10.2 release. Satheesaran within Red Hat also verified that they work and he's not seeing corruption issues anymore. I'd like to
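For anyone verifying on their own setup, rebalance is driven from the standard CLI. A sketch, assuming a test volume <VOL> with expendable VM images on it:

# gluster volume rebalance <VOL> start
# gluster volume rebalance <VOL> status    (poll until every node reports completed)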
2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
Status: We have a 3 node gluster cluster (proxmox based) - gluster 3.8.12 - Replica 3 - VM Hosting Only - Sharded Storage Or I should say we *had* a 3 node cluster, one node died today. Possibly I can recover it, in which case no issues, we just let it heal itself. For now it's running happily on 2 nodes with no data loss - gluster for the win! But it's looking like I might have to replace the
2017 Jun 20
5
[ovirt-users] Very poor GlusterFS performance
Couple of things: 1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4. # gluster volume set <VOL> performance.stat-prefetch on # gluster volume set <VOL> client.event-threads 4 # gluster volume set <VOL> server.event-threads 4 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
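To confirm those tunables took effect, they can be read back per option (<VOL> is a placeholder):

# gluster volume get <VOL> performance.stat-prefetch
# gluster volume get <VOL> client.event-threads
# gluster volume get <VOL> server.event-threads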
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news. Is this planned to be published in next release? On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > Thanks for that update. Very happy to hear it ran fine without any issues. > :) > > Yeah so you can ignore those 'No such file or directory' errors. They > represent a transient state where DHT in the client process
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi, Did you get a chance to verify this fix again? If this fix works for you, is it OK if we move this bug to CLOSED state and revert the rebalance-cli warning patch? -Krutika On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > Hello, > > > Yes, i forgot to upgrade the client as well. > > I did the upgrade and created a new volume,
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Dear Krutika, Sorry for asking so naively, but can you tell me what the recommendation to set the client and server event-threads parameters for a volume to 4 is based on? Is this, for example, based on the number of cores a GlusterFS server has? I am asking because I saw my GlusterFS volumes are set to 2 and would like to set these parameters to something meaningful for performance
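A common starting point is to compare the thread count against the server's core count. A sketch, with <VOL> as a placeholder:

# nproc                                           (cores on this server)
# gluster volume get <VOL> client.event-threads   (default is 2)
# gluster volume set <VOL> client.event-threads 4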
2017 Jun 19
0
Debian 3.8.12 packages have been updated?
On 18/06/2017 12:47 PM, Lindsay Mathieson wrote: > I installed 3.8.12 a while back and the packages seem to have been > updated since (2017-06-13), prompting me for updates. > > > I haven't seen any release announcements or notes on this though. Bump - new versions are 3.8.12-2. Just curious as to what has been changed/fixed. -- Lindsay Mathieson
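On Debian the packaging changelog usually answers this kind of question. A sketch; the package name depends on what is installed, and apt-get changelog only works if the repository serves changelogs:

# apt-get changelog glusterfs-server
# zless /usr/share/doc/glusterfs-server/changelog.Debian.gz   (local copy for the installed package)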
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 -Krutika On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Great news. > Is this planned to be published in next release? > > On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> > wrote: > >> Thanks for that update.
2017 Jun 09
2
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On Fri, Jun 9, 2017 at 12:41 PM, <lemonnierk at ulrar.net> wrote: > > I'm thinking the following: > > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote: > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work perfectly fine yes, either that > or
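Pulled together, the replacement procedure under discussion would look roughly like this. A sketch, not a tested runbook: vna is the dead node and vnd its replacement (both from the thread), and the detach may need force or manual cleanup since vna is unreachable:

# gluster volume remove-brick datastore4 replica 2 vna.proxmox.softlog:/tank/vmdata/datastore4 force
# gluster peer detach vna.proxmox.softlog force
# gluster peer probe vnd.proxmox.softlog
# gluster volume add-brick datastore4 replica 3 vnd.proxmox.softlog:/tank/vmdata/datastore4
# gluster volume heal datastore4 full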
2018 Jan 12
3
Integration of GPU with glusterfs
On 12/01/2018 3:14 AM, Darrell Budic wrote: > It would also add physical resource requirements to future client > deploys, requiring more than 1U for the server (most likely), and I'm > not likely to want to do this if I'm trying to optimize for client > density, especially with the cost of GPUs today. Nvidia has banned their GPUs from being used in data centers now too, I imagine
2017 Aug 25
4
GlusterFS as virtual machine storage
> This is true even if I manage locking at application level (via virlock > or sanlock)? Yes. Gluster has its own quorum; you can disable it, but that's just a recipe for disaster. > Also, on a two-node setup it is *guaranteed* for updates to one node to > put offline the whole volume? I think so, but I never took the chance, so who knows. > On the other hand, a 3-way
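The quorum behaviour referred to here is controlled per volume. A sketch of the usual replica 3 settings, with <VOL> as a placeholder:

# gluster volume set <VOL> cluster.quorum-type auto           (client quorum: writes need a brick majority)
# gluster volume set <VOL> cluster.server-quorum-type server  (server quorum: bricks stop without a glusterd majority)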
2018 Jan 15
2
[Gluster-devel] Integration of GPU with glusterfs
It is disappointing to see the limitation being put by Nvidia on low-cost GPU usage in data centers. https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/ We thought of providing an option in glusterfs to control whether it uses the GPU or not. That way, the concern about gluster eating up GPUs which could be used by others can be addressed. --- Ashish ----- Original
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote: > I thought you had removed vna as defective and then ADDED in vnh as > the replacement? > > Why is vna still there? Because I *can't* remove it. It died and was unable to be brought back up. The gluster peer detach command only works with live servers - a severe problem IMHO. -- Lindsay Mathieson
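The workaround usually suggested for a permanently dead peer is a forced detach, falling back to removing the stale peer file by hand. A sketch: the detach may still refuse, the UUID comes from peer status, and the service name is Debian's (glusterd elsewhere):

# gluster peer detach vna force
# gluster peer status                  (note the dead peer's UUID)
# systemctl stop glusterfs-server      (on each surviving node)
# rm /var/lib/glusterd/peers/<UUID>
# systemctl start glusterfs-server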