Displaying 20 results from an estimated 1000 matches similar to: "GlusterFS 3.8.14 is here, 3.8 even closer to End-Of-Life"
2017 Jun 29
0
GlusterFS 3.8.13 update available, and 3.8 nearing End-Of-Life
[Repost of a clickable blog to make it easier to read over email
http://blog.nixpanic.net/2017/06/glusterfs-3813-update-available.html
and also expected to arrive on https://planet.gluster.org/ soon. The
release notes can also be found in the release-3.8 branch on GitHub
https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.13.md]
The Gluster releases follow a 3-month
2017 Aug 21
0
GlusterFS 3.8.15 is available, likely the last 3.8 update
[from http://blog.nixpanic.net/2017/08/last-update-for-gluster-38.html
and also available on https://planet.gluster.org/ ]
The next Long-Term-Maintenance release for Gluster is around the
corner. Once GlusterFS-3.12 is available, the oldest maintained version
(3.8) will be retired and no maintenance updates are planned. With this
last
2017 Jun 06
0
Glusterfs 3.10.3 has been tagged
Apologies for the delay in sending mail about the builds. We have packages
ready for all distributions, thanks to Kaleb and Niels.
As per [1] (mostly[2]):
* Packages for Fedora 24 and Fedora 27 are available at [3].
* Packages for Fedora 25 and Fedora 26 are built and queued in Fedora
bodhi for testing. Once pushed to testing they will be available in
the Updates-Testing repo. After a nominal testing
2017 Oct 17
1
Distribute rebalance issues
Nithya,
Is there any way to increase the logging level of the brick? There is
nothing obvious (to me) in the log (see below for the same time period as
the latest rebalance failure). This is the only brick on that server that
has disconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029]
[server-handshake.c:692:server_setvolume] 0-video-server: accepted
client from
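To answer the question above: brick log verbosity can normally be raised per volume with the `diagnostics.brick-log-level` volume option. A sketch, assuming a volume named "video" (taken from the "0-video-server" prefix in the log above; adjust to your actual volume name):

```shell
# Sketch: raise brick log verbosity on a Gluster volume, then restore it later.
# Valid levels include INFO (default), DEBUG, and TRACE.
set_brick_log_level() {
  local volume="$1" level="$2"
  gluster volume set "$volume" diagnostics.brick-log-level "$level"
}

# Usage on any server node:
#   set_brick_log_level video DEBUG   # before reproducing the disconnect
#   set_brick_log_level video INFO    # restore the default afterwards
```

TRACE is extremely chatty; DEBUG is usually enough to see why a client connection is dropped.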
2017 Aug 04
0
GlusterFS 3.8 Debian 8 apt repository broken
I managed to work around this issue by adding "[arch=amd64]" to my apt source list for gluster like this:
deb [arch=amd64] http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.14/Debian/jessie/apt jessie main
In case that can help any others in the same situation (where they also have the i386 arch enabled on the computer).
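Spelled out as a full sequence, the workaround above would look something like this (the URL is the one from the message; verify it against the current repo layout before use):

```shell
# Pin the Gluster repo to amd64 so apt stops looking for
# non-existent i386 package indexes on a multi-arch system.
apply_gluster_arch_pin() {
  echo 'deb [arch=amd64] http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.14/Debian/jessie/apt jessie main' \
    > /etc/apt/sources.list.d/gluster.list
  apt-get update
}
```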
> -------- Original Message --------
> Subject:
2017 Aug 04
1
GlusterFS 3.8 Debian 8 apt repository broken
What would the fix be exactly?
The apt repos are built the same way they've been built for the last 3+ years, and you're the first person to trip over whatever it is you're tripping over.
And there have never been packages for i386 for Debian.
----- Original Message -----
> From: "mabi" <mabi at protonmail.ch>
> To: "Gluster Users" <gluster-users
2017 Aug 04
2
GlusterFS 3.8 Debian 8 apt repository broken
Hello,
I want to upgrade from 3.8.11 to 3.8.14 on my Debian 8 (jessie) servers but it looks like the official GlusterFS apt repository has a mistake as you can see here:
Get:14 http://download.gluster.org jessie InRelease [2'083 B]
Get:15 http://download.gluster.org jessie/main amd64 Packages [1'602 B]
Fetched 23.7 kB in 2s (10.6 kB/s)
W: Failed to fetch
2017 Aug 17
0
Glusterfs brick logs after upgrading from 3.6 to 3.8
I upgraded my glusterfs (2x replica) from 3.6 to 3.8.14 yesterday.
It is now producing lots of logs in /var/log/glusterfs/bricks like this:
[2017-08-17 07:28:38.477730] W [dict.c:1223:dict_foreach_match]
(-->/usr/local/lib/libglusterfs.so.0(dict_foreach_match+0x5c)
[0x7f2eae4988ac]
-->/usr/local/lib/glusterfs/3.8.14/xlator/features/index.so(+0x6da0)
[0x7f2ea2295da0]
2014 Jul 07
0
CentOS 7 Release - Zero Day Updates
The following SRPM packages were built and included as Zero Day Updates
as part of the CentOS-7.0.1406 Release in the updates directory.
To get all updates, use this command after installing CentOS 7:
yum upgrade
Updates:
NetworkManager-0.9.9.1-22.git20140326.4dba720.el7_0.src.rpm
https://access.redhat.com/errata/RHBA-2014:0726
NetworkManager-0.9.9.1-23.git20140326.4dba720.el7_0.src.rpm
2017 Jul 10
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
I upgraded from 3.8.12 to 3.8.13 without issues.
Two replicated volumes with an online update; clients upgraded first, followed by the servers: "stop glusterd, pkill gluster*, update gluster*, start glusterd, monitor the healing process and logs, and after completion proceed to the other node".
Check the gluster logs for more information.
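The per-node procedure quoted above can be sketched as follows (service and package names assume a systemd/yum system; adjust for apt-based distributions, and run on one node at a time):

```shell
# Sketch of the rolling-upgrade steps described above.
# Run on ONE server node; wait for healing to finish before the next node.
upgrade_gluster_node() {
  systemctl stop glusterd      # stop the management daemon
  pkill gluster                # stop remaining brick/self-heal processes
  yum -y update 'glusterfs*'   # install the new packages
  systemctl start glusterd     # bring the node back into the cluster
}

# Then watch healing until no entries remain, e.g.:
#   gluster volume heal <volname> info
```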
--
Respectfully
Mahdi A. Mahdi
2017 Jul 11
3
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Mon, Jul 10, 2017 at 10:33 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> I upgraded from 3.8.12 to 3.8.13 without issues.
>
> Two replicated volumes with online update, upgraded clients first and
> followed by servers upgrade, "stop glusterd, pkill gluster*, update
> gluster*, start glusterd, monitor healing process and logs, after
> completion proceed to
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access
Hi,
After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the
KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using
libgfapi are no longer able to start. The libvirt log file shows:
[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify]
0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
[2016-11-02 14:26:41.864075] I [MSGID:
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
>
> You should first upgrade servers and then clients. New servers can
> understand old clients, but it is not easy for old servers to understand new
> clients in case it started doing something new.
But isn't that the reason op-version exists? So that regardless of
client/server mix, nobody tries to do "new" things above the current
op-version?
He is not changing major
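For readers following the op-version discussion: the cluster-wide op-version can be inspected and raised with the standard gluster CLI (a sketch; the exact value to set depends on the installed release, so check the upgrade guide rather than guessing):

```shell
# Show the currently active cluster op-version.
show_op_version() {
  gluster volume get all cluster.op-version
}

# Raise it explicitly, but ONLY after every node runs the new release.
# Pass the release-specific value from the Gluster upgrade documentation.
bump_op_version() {
  gluster volume set all cluster.op-version "$1"
}
```

Until the op-version is bumped, servers keep speaking the older feature set, which is exactly the mixed-version safety net being debated above.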
2007 Aug 22
1
Cisco firmwares 3.6.3 vs 3.8.6
Hi All,
A question for those with Cisco 7940/60 SIP phones. I used to load
POS3-06-03-00 Firmware to the cisco phones. A month or so ago, I ran
some tests and found that latest 3.8.6 firmware worked well, and solved
an issue or two on the phones.
I've a number of users who work outside of the LAN. Our phones use DNS
names to talk to A*k, so in theory, just enabling NAT makes the phone
2016 Nov 21
1
CTDB implementation lock file permission denied
Hi folks,
I know this topic was already discussed, but unfortunately none of the
suggestions helped.
My current setup: GlusterFS on ZFS with CTDB installed, but I am running
into a "Permission denied" error with the lock file.
What I have tried/tested:
- Mounting the gluster volume with option --direct-io-mode=enable
- ping_pong -rw /gluster/lock/lock 3
2014 Jul 08
0
CentOS-announce Digest, Vol 113, Issue 3
Send CentOS-announce mailing list submissions to
centos-announce at centos.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-request at centos.org
You can reach the person managing the list at
centos-announce-owner at centos.org
When
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
Well, it was probably caused by running replica 2 and doing an online
upgrade. However, I added a brick, turned the volume into replica 3 with
an arbiter, and hit a very strange issue that I will mail to this list in a
moment...
Thanks.
-ps
On Tue, Jul 11, 2017 at 1:55 PM, Pranith Kumar Karampuri
<pkarampu at redhat.com> wrote:
>
>
> On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at
2017 Jul 11
2
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> >
> > You should first upgrade servers and then clients. New servers can
> > understand old clients, but it is not easy for old servers to understand
> new
> > clients in case it started doing something new.
>
> But isn't that the reason op-version exists? So that regardless
2014 Apr 11
1
Possible Dahdi compile problem
I am having problems with system crashes in a dahdi/asterisk compile. This is on a BeagleBone Black running a Debian 3.8.13 kernel.
Dahdi is patched and asterisk is also custom, so this must be compiled from source. I believe I have the proper header and source files for the running kernel. Both dahdi and asterisk compile and install. When run, asterisk runs for anywhere from 10 minutes to a half
2006 Mar 28
1
Mongrel Web Server 0.3.12 -- Updated, Getting Closer
Everyone tracking the Mongrel 0.3.12 pre-release should dump their current
install and re-install:
$ gem uninstall mongrel
$ gem uninstall gem_plugin
$ gem install mongrel --source=http://mongrel.rubyforge.org/releases/
This release fixes a problem with specifying a directory to change to, and
fixes the incredibly broken DirHandler code from last night.
The big change people will probably