
Displaying 20 results from an estimated 700 matches similar to: "GlusterFS 3.8.15 is available, likely the last 3.8 update"

2017 Jul 15
0
GlusterFS 3.8.14 is here, 3.8 even closer to End-Of-Life
[From http://blog.nixpanic.net/2017/07/glusterfs-3814-is-here.html and also on https://planet.gluster.org/] GlusterFS 3.8.14 is here, 3.8 even closer to End-Of-Life The 10th of the month has passed again, that means a 3.8.x update can't be far out. So, here it is, we're announcing the availability of glusterfs-3.8.14. Note that this is one of the last updates in the 3.8
2017 Jun 29
0
GlusterFS 3.8.13 update available, and 3.8 nearing End-Of-Life
[Repost of a clickable blog to make it easier to read over email http://blog.nixpanic.net/2017/06/glusterfs-3813-update-available.html and also expected to arrive on https://planet.gluster.org/ soon. The release notes can also be found in the release-3.8 branch on GitHub https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.13.md] The Gluster releases follow a 3-month
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > > > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi Pranith, > > I'm using this guide > https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2017 Mar 10
0
CentOS-announce Digest, Vol 145, Issue 5
Send CentOS-announce mailing list submissions to centos-announce at centos.org To subscribe or unsubscribe via the World Wide Web, visit https://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-request at centos.org You can reach the person managing the list at centos-announce-owner at centos.org When
2017 Jul 11
2
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at gmail.com> wrote: > > > > You should first upgrade servers and then clients. New servers can > > understand old clients, but it is not easy for old servers to understand > new > > clients in case it started doing something new. > > But isn't that the reason op-version exists? So that regardless
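[A minimal sketch, not part of the thread above, of how the cluster op-version being discussed can be inspected and, once every server runs the new packages, raised. The glusterd.info path is the usual glusterd location and the version number is purely illustrative.]
# on any server, glusterd records the op-version currently in effect
grep operating-version /var/lib/glusterd/glusterd.info
# after all servers have been upgraded, the op-version can be bumped explicitly
gluster volume set all cluster.op-version 30800   # illustrative value, use the one matching your release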
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Pranith, > > I'm using this guide https://github.com/nixpanic/glusterdocs/blob/ > f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > > Definitely my fault, but I think it is better to specify somewhere that > restarting the service is not enough, simply
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
Well, it was probably caused by running replica 2 and doing an online upgrade. However, I added a brick, turned the volume into replica 3 with arbiter and got a very strange issue I will mail to this list in a moment... Thanks. -ps On Tue, Jul 11, 2017 at 1:55 PM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote: > > > On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at
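[A rough sketch, not from the original message, of the kind of command used to turn a replica 2 volume into replica 3 with an arbiter by adding a brick; the volume name, host and brick path are made up.]
# add one arbiter brick on a third node, converting replica 2 -> replica 3 arbiter 1
gluster volume add-brick myvol replica 3 arbiter 1 node3:/bricks/myvol/arbiter
# then let self-heal populate the new brick before relying on it
gluster volume heal myvol info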
2017 Aug 04
0
GlusterFS 3.8 Debian 8 apt repository broken
I managed to work around this issue by adding "[arch=amd64]" to my apt source list for gluster like this: deb [arch=amd64] http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.14/Debian/jessie/apt jessie main In case that can help any others with the same situation (where they also have the i386 arch enabled on the computer). > -------- Original Message -------- > Subject:
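[A short sketch, not from the original message, of the workaround above as a complete apt source entry; the file name is arbitrary and the repository URL is the one quoted in the mail.]
# /etc/apt/sources.list.d/gluster.list
# limiting the entry to amd64 stops apt from requesting i386 indexes the repo does not provide
deb [arch=amd64] http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.14/Debian/jessie/apt jessie main
# then refresh the package lists
apt-get update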
2017 Aug 04
1
GlusterFS 3.8 Debian 8 apt repository broken
What would the fix be exactly? The apt repos are built the same way they've been built for the last 3+ years and you're the first person to trip over whatever it is you're tripping over. And there have never been packages for i386 for Debian. ----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Gluster Users" <gluster-users
2017 Aug 04
2
GlusterFS 3.8 Debian 8 apt repository broken
Hello, I want to upgrade from 3.8.11 to 3.8.14 on my Debian 8 (jessie) servers but it looks like the official GlusterFS apt repository has a mistake as you can see here: Get:14 http://download.gluster.org jessie InRelease [2'083 B] Get:15 http://download.gluster.org jessie/main amd64 Packages [1'602 B] Fetched 23.7 kB in 2s (10.6 kB/s) W: Failed to fetch
2017 Aug 17
0
Glusterfs brick logs after upgrading from 3.6 to 3.8
I upgraded my glusterfs (2x replica) to 3.8.14 from 3.6 yesterday. Since then it is producing lots of logs in /var/log/glusterfs/bricks like this: [2017-08-17 07:28:38.477730] W [dict.c:1223:dict_foreach_match] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach_match+0x5c) [0x7f2eae4988ac] -->/usr/local/lib/glusterfs/3.8.14/xlator/features/index.so(+0x6da0) [0x7f2ea2295da0]
2013 Dec 01
1
Adding Gluster support for Primary Storage in CloudStack
Hi all, I'd like to inform any CloudStack users that there are now patches [0] available (for CloudStack) that make it possible to use an existing Gluster environment as Primary Storage on CloudStack. The changes extend CloudStack so that libvirt will be used for mounting the Gluster Volume over the fuse-client. Some further details and screenshots are available on my blog [1]. If there
2019 Apr 02
0
CentOS-announce Digest, Vol 170, Issue 1
Send CentOS-announce mailing list submissions to centos-announce at centos.org To subscribe or unsubscribe via the World Wide Web, visit https://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-request at centos.org You can reach the person managing the list at centos-announce-owner at centos.org When
2017 Aug 25
2
3.8 Upgrade to 3.10
Currently running 3.8.12, planning to rolling upgrade it to 3.8.15 this weekend. * debian 8 * 3 nodes * Replica 3 * Sharded * VM Hosting only The release notes strongly recommend upgrading to 3.10 * Is there any downside to staying on 3.8.15 for a while longer? * I didn't see anything I had to have in 3.10, but ongoing updates are always good :( This mildly concerned me:
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
Hi Pranith, I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md Definitely my fault, but I think it is better to specify somewhere that restarting the service is not enough, simply because in many other cases, with other services, it is sufficient. Now I'm restarting every brick process (and waiting for
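[A rough per-node sketch, not from the original message, of the point being made: after upgrading the packages the brick processes themselves must be restarted so they run the new version. Assumes a systemd-based node and a volume called myvol.]
systemctl stop glusterd            # stop the management daemon
pkill glusterfsd; pkill glusterfs  # stop brick, self-heal and other gluster processes
# ... upgrade the gluster packages here ...
systemctl start glusterd           # glusterd respawns the bricks with the new binaries
gluster volume heal myvol info     # wait until pending heals reach zero before the next node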
2017 Aug 25
0
3.8 Upgrade to 3.10
On 08/25/2017 09:17 AM, Lindsay Mathieson wrote: > Currently running 3.8.12, planning to rolling upgrade it to 3.8.15 this > weekend. > > * debian 8 > * 3 nodes > * Replica 3 > * Sharded > * VM Hosting only > > The release notes strongly recommend upgrading to 3.10 > > * Is there any downside to staying on 3.8.15 for a while longer? 3.8 will
2018 Feb 19
0
Upgrade from 3.8.15 to 3.12.5
I believe the peer rejected issue is something we recently identified; it has been fixed through https://bugzilla.redhat.com/show_bug.cgi?id=1544637 and is available in 3.12.6. I'd request you to upgrade to the latest version in the 3.12 series. On Mon, Feb 19, 2018 at 12:27 PM, <rwecker at ssd.org> wrote: > Hi, > > I have a 3 node cluster (Found1, Found2, Found2) which I wanted
2018 Mar 20
0
CentOS-announce Digest, Vol 157, Issue 5
Send CentOS-announce mailing list submissions to centos-announce at centos.org To subscribe or unsubscribe via the World Wide Web, visit https://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-request at centos.org You can reach the person managing the list at centos-announce-owner at centos.org When
2009 Jan 28
0
smp_tlb_shootdown bottleneck?
Hi. Sometimes I see much contention in smp_tlb_shootdown while running sysbench: sysbench --test=fileio --num-threads=8 --file-test-mode=rndrd --file-total-size=3G run kern.smp.cpus: 8 FreeBSD 7.1-R CPU: 0.8% user, 0.0% nice, 93.8% system, 0.0% interrupt, 5.4% idle Mem: 11M Active, 2873M Inact, 282M Wired, 8K Cache, 214M Buf, 765M Free Swap: 4096M Total, 4096M Free PID USERNAME PRI NICE
2017 Oct 17
1
Distribute rebalance issues
Nithya, Is there any way to increase the logging level of the brick? There is nothing obvious (to me) in the log (see below for the same time period as the latest rebalance failure). This is the only brick on that server that has disconnects like this. Steve [2017-10-17 02:22:13.453575] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-video-server: accepted client from
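[Not part of the original message: one way to raise the brick log verbosity for the affected volume using the standard diagnostics options. The volume name 'video' is inferred from the '0-video-server' log prefix and may not be exact.]
gluster volume set video diagnostics.brick-log-level DEBUG   # or TRACE for even more detail
# reproduce the rebalance failure, then restore the default
gluster volume set video diagnostics.brick-log-level INFO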