
Displaying 20 results from an estimated 1100 matches similar to: "GlusterFS 3.8.13 update available, and 3.8 nearing End-Of-Life"

2017 Jul 15
0
GlusterFS 3.8.14 is here, 3.8 even closer to End-Of-Life
[From http://blog.nixpanic.net/2017/07/glusterfs-3814-is-here.html and also on https://planet.gluster.org/] GlusterFS 3.8.14 is here, 3.8 even closer to End-Of-Life The 10th of the month has passed again, which means a 3.8.x update can't be far off. So, here it is, we're announcing the availability of glusterfs-3.8.14. Note that this is one of the last updates in the 3.8
2017 Aug 21
0
GlusterFS 3.8.15 is available, likely the last 3.8 update
[from http://blog.nixpanic.net/2017/08/last-update-for-gluster-38.html and also available on https://planet.gluster.org/ ] GlusterFS 3.8.15 is available, likely the last 3.8 update The next Long-Term-Maintenance release for Gluster is around the corner. Once GlusterFS-3.12 is available, the oldest maintained version (3.8) will be retired and no maintenance updates are planned. With this last
2017 Jul 11
2
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at gmail.com> wrote: >> You should first upgrade servers and then clients. New servers can understand old clients, but it is not easy for old servers to understand new clients in case it started doing something new. > But isn't that the reason op-version exists? So that regardless
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
Well, it was probably caused by running replica 2 and doing an online upgrade. However, I added a brick, turned the volume to replica 3 with arbiter, and got a very strange issue I will mail to this list in a moment... Thanks. -ps On Tue, Jul 11, 2017 at 1:55 PM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote: > On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
> You should first upgrade servers and then clients. New servers can understand old clients, but it is not easy for old servers to understand new clients in case it started doing something new. But isn't that the reason op-version exists? So that regardless of client/server mix, nobody tries to do "new" things above the current op-version? He is not changing major
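For readers following this thread: the op-version under discussion is a cluster-wide setting. A minimal sketch of inspecting and, once every server runs the new version, raising it; the file path assumes a stock glusterd install, and the numeric value is only a placeholder to be checked against the release notes:

    # Current operating version, as recorded by glusterd on each server:
    grep operating-version /var/lib/glusterd/glusterd.info

    # After ALL servers are upgraded, raise the cluster op-version.
    # 30800 is illustrative; use the value documented for your 3.8.x release:
    gluster volume set all cluster.op-version 30800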
2017 Jul 11
3
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Mon, Jul 10, 2017 at 10:33 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > I upgraded from 3.8.12 to 3.8.13 without issues. Two replicated volumes with an online update; upgraded clients first, followed by the server upgrade: "stop glusterd, pkill gluster*, update gluster*, start glusterd, monitor healing process and logs, after completion proceed to
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: >> Hi Pranith, I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2017 Mar 10
0
CentOS-announce Digest, Vol 145, Issue 5
Send CentOS-announce mailing list submissions to centos-announce at centos.org To subscribe or unsubscribe via the World Wide Web, visit https://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-request at centos.org You can reach the person managing the list at centos-announce-owner at centos.org When
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Pranith, I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > Definitely my fault, but I think it is better to specify somewhere that restarting the service is not enough simply
2017 Oct 06
0
Gluster 3.8.13 data corruption
Any chance of a backup you could do a bit-compare with? Sent from my Windows 10 phone From: Mahdi Adnan Sent: Friday, 6 October 2017 12:26 PM To: gluster-users at gluster.org Subject: [Gluster-users] Gluster 3.8.13 data corruption Hi, We're running Gluster 3.8.13 replica 2 (SSDs); it's used as the storage domain for oVirt. Today, we found an issue with one of the VM templates; after
2017 Oct 06
0
Gluster 3.8.13 data corruption
Hi, Thank you for your reply. Lindsay, unfortunately I do not have a backup for this template. Krutika, stat-prefetch is already disabled on the volume. -- Respectfully Mahdi A. Mahdi From: Krutika Dhananjay <kdhananj at redhat.com> Sent: Friday, October 6, 2017 7:39 AM To: Lindsay Mathieson Cc: Mahdi Adnan; gluster-users at gluster.org Subject: Re:
2007 Aug 09
0
win32-changenotify 0.5.0 nearing release
Hi all, I made some minor final changes to the win32-changenotify code (added accessors I forgot, updated the test suite, explicit type checking in the constructor, etc). The only somewhat major change that I made is that I now have the constructor yield/close if a block is given. Unless anyone objects to that, it's going into 0.5.0. Otherwise, I'll put 0.5.0 out tonight.
2004 Jun 16
2
1.0-test16 - nearing usable state
http://dovecot.org/test/ Here you go, maildir syncing problems finally fixed (I hope). Well, except there's this "new-dir-only syncing" optimization which I'm not really sure works as it should. The bug really was in the index code, as I was beginning to suspect. The good thing is that while trying to figure it out over the last several weeks I thought of many other potential
2017 Jul 10
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
I upgraded from 3.8.12 to 3.8.13 without issues. Two replicated volumes with an online update; upgraded clients first, followed by the server upgrade: "stop glusterd, pkill gluster*, update gluster*, start glusterd, monitor healing process and logs, after completion proceed to the other node". Check the gluster logs for more information. -- Respectfully Mahdi A. Mahdi
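The quoted procedure, written out as a per-node sketch. Command spellings here are assumptions: the package manager and service commands vary by distribution, and VOLNAME is a placeholder:

    # Run on one server node at a time:
    systemctl stop glusterd
    pkill gluster                 # stops leftover brick/self-heal daemons
    yum update 'glusterfs*'       # or your distribution's equivalent
    systemctl start glusterd

    # Monitor healing; move to the next node only once this shows no entries:
    gluster volume heal VOLNAME info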
2017 Oct 09
1
Gluster 3.8.13 data corruption
OK. Is this problem unique to templates for a particular guest OS type, or is this something you see for all guest OSes? Also, can you get the output of `getfattr -d -m . -e hex <path>` for the following two "paths" from all of the bricks: the path to the file representing the VM created off this template, relative to the brick. It will usually be $BRICKPATH/xxxx....xx/images/$UUID where $UUID
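A sketch of collecting the requested xattr output; the brick path and image file name below are placeholders (the real brick paths come from `gluster volume info`, the image layout from the oVirt storage domain):

    # Run on every server, once per brick holding a copy of the file:
    getfattr -d -m . -e hex $BRICKPATH/xxxx....xx/images/$UUID/IMAGE-FILE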
2013 Dec 01
1
Adding Gluster support for Primary Storage in CloudStack
Hi all, I'd like to inform any CloudStack users that there are now patches [0] available (for CloudStack) that make it possible to use an existing Gluster environment as Primary Storage on CloudStack. The changes extend CloudStack so that libvirt will be used for mounting the Gluster Volume over the fuse-client. Some further details and screenshots are available on my blog [1]. If there
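For context, the fuse-client mount that the patches drive through libvirt looks like the standard Gluster mount below; the host, volume, and mount point names are hypothetical:

    # Manual equivalent of what libvirt sets up for the primary storage:
    mount -t glusterfs gluster-host:/primary-storage /mnt/primary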
2017 Oct 06
2
Gluster 3.8.13 data corruption
Could you disable stat-prefetch on the volume and create another VM off that template and see if it works? -Krutika On Fri, Oct 6, 2017 at 8:28 AM, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote: > Any chance of a backup you could do a bit-compare with? > Sent from my Windows 10 phone > From: Mahdi Adnan <mahdi.adnan at outlook.com>
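A sketch of the suggested change, with VOLNAME as a placeholder; performance.stat-prefetch is the volume option behind the "stat-prefetch" shorthand used above (the `volume get` verification assumes a CLI recent enough to support it):

    gluster volume set VOLNAME performance.stat-prefetch off
    gluster volume get VOLNAME performance.stat-prefetch    # verify the new value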
2019 Apr 02
0
CentOS-announce Digest, Vol 170, Issue 1
Send CentOS-announce mailing list submissions to centos-announce at centos.org To subscribe or unsubscribe via the World Wide Web, visit https://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-request at centos.org You can reach the person managing the list at centos-announce-owner at centos.org When
2017 Oct 05
2
Gluster 3.8.13 data corruption
Hi, We're running Gluster 3.8.13 replica 2 (SSDs); it's used as the storage domain for oVirt. Today, we found an issue with one of the VM templates: after deploying a VM from this template it will not boot; it gets stuck mounting the root partition. We've been using this template for months now and we have not had any issues with it. Both the oVirt and Gluster logs are not showing any errors or
2006 Mar 14
6
cFerret nearing completion
Hey folks, Some good news. I've finished cFerret and its Ruby bindings to the point where I can run all of the unit tests. I still have to work out how I'm going to package and release it but it shouldn't be long now. If you can't wait you might like to try it from the Subversion repository. It'll probably only work on Linux at the moment and