Displaying 20 results from an estimated 20000 matches similar to: "Gluster client disconnects"
2012 Jul 20
0
Gluster peers disconnecting
Dear Gluster,
I'm running Gluster 3.3 on a four host setup. (Two bricks per host.)
I'm attempting to use this as an rsnapshot backup system. Periodically,
the rsyncs seem to fail, and I think this is due to underlying gluster
failures. I notice this in /var/log/messages:
# Jul 19 23:47:15 annex1 GlusterFS[26815]: [2012-07-19 23:47:15.424909] C
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Hi,
Have you checked for any file system errors on the brick mount point?
I once was facing weird I/O errors, and xfs_repair fixed the issue.
What about the heal? Does it report any pending heals?
On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote:
> Well, it looks like I've stumped the list, so I did a bit of additional
> digging myself:
>
>
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional
digging myself:
azathoth replicates with yog-sothoth, so I compared their brick
directories. `ls -R /var/local/brick0/data | md5sum` gives the same
result on both servers, so the filenames are identical in both bricks.
However, `du -s /var/local/brick0/data` shows that azathoth has about 3G
more data (445G vs 442G) than
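The brick-comparison technique described in the excerpt above can be reproduced on ordinary directories. The paths below are stand-ins for the two replica bricks, not the actual brick paths from the report:

```shell
# Stand-in directories for two replica bricks (hypothetical paths).
mkdir -p /tmp/brickA/data /tmp/brickB/data
echo "short" > /tmp/brickA/data/f1
echo "noticeably longer content" > /tmp/brickB/data/f1

# Identical file names give identical listing checksums...
(cd /tmp/brickA && ls -R data | md5sum)
(cd /tmp/brickB && ls -R data | md5sum)

# ...yet du still reveals a size difference, pointing at differing
# file contents (or pending heals) rather than missing files.
du -sb /tmp/brickA/data /tmp/brickB/data
```

The same pair of commands, run on each server against the real brick path, separates "files missing" from "files present but different" as the poster did.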
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
I'm using gluster for a virt-store with 3x2 distributed/replicated
servers for 16 qemu/kvm/libvirt virtual machines using image files
stored in gluster and accessed via libgfapi. Eight of these disk images
are standalone, while the other eight are qcow2 images which all share a
single backing file.
For the most part, this is all working very well. However, one of the
gluster servers
2017 Nov 08
0
Gluster Summit BOF - Testing
Hi all,
We had a BoF about Upstream Testing and increasing coverage.
Discussion included:
- More docs on using the gluster-specific libraries.
- Templates, examples, and testcase scripts with common functionality as a jumping off point to create a new test script.
- Reduce the number of systems required by existing libraries (but scale as needed). e.g., two instead of eight.
- Providing
2013 Dec 15
2
puppet-gluster from zero: hangout?
Hey james and JMW:
Can/Should we schedule a google hangout where james spins up a
puppet-gluster based gluster deployment on fedora from scratch? Would love
to see it in action (and possibly steal it for our own vagrant recipes).
To speed this along: Assuming James is in England here, correct me if I'm
wrong, but if so, let me propose a date: Tuesday at 12 EST (that's 5 PM in
London - which i
2012 Jul 06
1
Gluster 3.3.0 installation on RHEL 5.1 problem
Hello,
I have set up Gluster 3.2.2 on 2 nodes with RHEL 5.1. Now I am trying
to upgrade to Gluster 3.3.0, but RPMs for Gluster 3.3.0 are available for
RHEL 6 only.
I got the RPMs from here:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/
Also, source installation is unsuccessful. Can you tell me if there is a way
to install it on RHEL 5.1, or should I install RHEL 6?
Thanks in
2011 Oct 06
1
fuse mount disconnecting...
hi,
I am getting regular crashes which result in the mount being dropped:
n1:~ # ls /n/auto/gv1/
ls: cannot access /n/auto/gv1/: Transport endpoint is not connected
client side error log: http://pastebin.com/UgMaLq42
..I am also finding that the gluster servers also sometimes just drop out -
and I need to kill all the server-side gluster processes and restart
glusterd. I'm not sure if
2013 Nov 01
1
Gluster "Cheat Sheet"
Greetings,
One of the best things I've seen at conferences this year has been a bookmark distributed by the RDO folks with the most common and/or useful commands for OpenStack users.
Some people at Red Hat were wondering about doing the same for Gluster, and I thought it would be a great idea. Paul Cuzner, the author of the gluster-deploy project, took a first cut, pasted below. What do you
2011 Oct 18
2
gluster rebalance taking three months
Hi guys,
we have a rebalance running on eight bricks since July and this is
what the status looks like right now:
===Tue Oct 18 13:45:01 CST 2011 ====
rebalance step 1: layout fix in progress: fixed layout 223623
There are roughly 8T photos in the storage,so how long should this
rebalance take?
What does the number (in this case 223623) represent?
Our gluster information:
Repository
2018 Feb 28
0
[Gluster-Maintainers] [Gluster-devel] Release 4.0: RC1 tagged
I found the following memory leak present in 3.13, 4.0 and master:
https://bugzilla.redhat.com/show_bug.cgi?id=1550078
I will clone/port to 4.0 as soon as the patch is merged.
On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero <xavinux at gmail.com> wrote:
> Hi all,
>
> Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel
> 3.10.0-693.17.1.el7.x86_64
>
> This
2013 Oct 10
2
A "Wizard" for Initial Gluster Configuration
Hi,
I'm writing a tool to simplify the initial configuration of a cluster, and it's now in a state that I find useful.
Obviously the code is on the forge and can be found at https://forge.gluster.org/gluster-deploy
If you're interested in what it does but don't have the time to look at the code, I've uploaded a video to YouTube:
http://www.youtube.com/watch?v=UxyPLnlCdhA
Feedback
2017 Sep 01
2
[Gluster-devel] docs.gluster.org
On Wednesday 30 August 2017 at 12:11 +0530, Nigel Babu wrote:
> Hello,
>
> To reduce confusion, we've setup docs.gluster.org pointing to
> gluster.readthedocs.org. Both URLs will continue to work for the
> foreseeable
> future.
>
> Please update any references that you control to point to
> docs.gluster.org. At
> some point in the distant future, we will switch to
2017 Sep 01
0
[Gluster-devel] docs.gluster.org
On Friday 1 September 2017 at 14:02 +0100, Michael Scherer wrote:
> On Wednesday 30 August 2017 at 12:11 +0530, Nigel Babu wrote:
> > Hello,
> >
> > To reduce confusion, we've setup docs.gluster.org pointing to
> > gluster.readthedocs.org. Both URLs will continue to work for the
> > foreseeable
> > future.
> >
> > Please update any
2018 Feb 28
2
[Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged
Hi all,
Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel
3.10.0-693.17.1.el7.x86_64
This package works ok
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
# yum install http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
# yum install glusterfs-server
# systemctl
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it:
https://bugzilla.redhat.com/show_bug.cgi?id=1491059
https://bugzilla.redhat.com/show_bug.cgi?id=1491060
Please see the above bugs for full detail.
In summary, my issue was related to glusterd's handling of pid files
when it starts self-heal and brick processes. The issues are:
a. brick pid file leaves stale pid and brick fails
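A stale-pid-file condition like the one reported above can be detected with a small check. The helper name and file path here are hypothetical illustrations, not glusterd internals:

```shell
# Print "running" if the pid recorded in the given file is alive,
# "stale" otherwise. Uses kill -0, which probes process existence
# without sending a signal.
check_pidfile() {
  if [ -f "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null; then
    echo running
  else
    echo stale
  fi
}

echo $$ > /tmp/demo-brick.pid        # a live process: the current shell
check_pidfile /tmp/demo-brick.pid

echo 99999999 > /tmp/demo-brick.pid  # a pid well above pid_max, so no such process
check_pidfile /tmp/demo-brick.pid
```

A stale pid in the file is exactly the state the bug describes: the file exists, but the process it names is gone, so the brick fails to start cleanly.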
2008 Oct 24
1
performance lower then expected
I've set up an eight-node server stripe using gluster 1.4.0pre5 using the
stripe example from the wiki. Each of these eight nodes has a 100Mbit
ethernet card and a single hard disk.
I've connected them all together using a gigabit switch and I have a gigabit
workstation connected, with gluster mounted and running fine.
However, when I try to do a dd test to the disk "dd if=/dev/zero
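A sequential-write test of the kind the quote starts to describe usually looks something like the following. The output path and sizes are illustrative only, not taken from the original mail:

```shell
# Write 10 MiB of zeros sequentially; dd reports elapsed time and
# throughput on stderr when it finishes. On a gluster mount, point
# 'of=' at a file under the mount instead of /tmp.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=10
```

Note that for a stripe over eight 100 Mbit nodes, the per-node NIC speed, not the client's gigabit link, is usually what such a test ends up measuring.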
2018 Jul 03
1
[CentOS-announce] Announcing the release of Gluster 4.1 on CentOS Linux 6 x86_64
Hello Niels,
On Wed, 27 Jun 2018 16:45:37 +0200 Niels de Vos <ndevos at redhat.com> wrote:
> I am happy to announce the General Availability of Gluster 4.1 for
> CentOS 6 on x86_64. These packages are following the upstream Gluster
> Community releases, and will receive monthly bugfix updates.
>
> Gluster 4.1 is a Long-Term-Maintenance release, and will receive
>
2018 Feb 27
2
[Gluster-Maintainers] Release 4.0: RC1 tagged
On 02/26/2018 02:03 PM, Shyam Ranganathan wrote:
> Hi,
>
> RC1 is tagged in the code, and the request for packaging the same is on
> its way.
>
> We should have packages as early as today, and request the community to
> test the same and return some feedback.
>
> We have about 3-4 days (till Thursday) for any pending fixes and the
> final release to happen, so