Displaying 20 results from an estimated 300 matches similar to: "glusterd2 problem"
2018 Apr 06
0
glusterd2 problem
Hi Dmitry,
How many nodes does the cluster have?
If the quorum is lost (majority of nodes are down), additional recovery
steps are necessary to bring it back up:
https://github.com/gluster/glusterd2/wiki/Recovery
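For context, "majority" here means a strict majority of the peer count. A minimal sketch of the quorum arithmetic, for illustration only (this is not glusterd2's actual implementation):

```python
def quorum_size(cluster_size: int) -> int:
    # A strict majority: more than half of the nodes must be up.
    return cluster_size // 2 + 1

def has_quorum(cluster_size: int, live_nodes: int) -> bool:
    # True when enough nodes are alive for the cluster to make progress.
    return live_nodes >= quorum_size(cluster_size)
```

So a 3-node cluster keeps quorum with 2 nodes up, but loses it with only 1, which is when the recovery steps linked above become necessary.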
On Wed, Apr 4, 2018 at 11:08 AM, Dmitry Melekhov <dm at belkam.com> wrote:
> Hello!
>
> Installed packages from SIG on centos7 ,
>
> at first start it works,
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users
01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|
1) Would this command work for that?
glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01
clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01
clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01
clustr-06:/mnt/data01
So the
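The intended layout (consecutive pairs mirrored, files distributed across the pairs) can be sketched with a small helper. The host names are the ones from the question; this only illustrates the grouping, and is not a confirmation that the volgen command above is correct:

```python
def mirror_pairs(hosts):
    # Group consecutive hosts into replica-2 (mirrored) pairs; the
    # distribute layer then spreads files across these pairs.
    if len(hosts) % 2:
        raise ValueError("need an even number of hosts for raid 1 pairs")
    return [(hosts[i], hosts[i + 1]) for i in range(0, len(hosts), 2)]
```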
2017 Aug 02
0
[Update] GD2 - what's been happening
Hello!
We're restarting regular GD2 updates. This is the first one, and I
expect to send these out every other week.
In the last month, we've identified a few core areas that we need to
focus on. With solutions in place for these, we believe we're ready to
start deeper integration with glusterfs, which will require changes in
the rest of the code base.
As of release
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi,
The existing syntax in the gluster CLI for creating arbiter volumes is
`gluster volume create <volname> replica 3 arbiter 1 <list of bricks>` .
It means (or at least intended to mean) that out of the 3 bricks, 1
brick is the arbiter.
There has been some feedback while implementing arbiter support in
glusterd2 for glusterfs-4.0 that we should change this to `replica 2
arbiter
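As an illustration of the existing semantics (in current gluster, the last brick of each replica set acts as the arbiter), a hedged sketch of the brick-to-role mapping; this is illustrative, not glusterd code:

```python
def classify_bricks(bricks, replica=3, arbiter=1):
    # Split the flat brick list into replica sets and mark the trailing
    # `arbiter` brick(s) of each set as arbiters.
    if len(bricks) % replica:
        raise ValueError("brick count must be a multiple of the replica count")
    sets = []
    for i in range(0, len(bricks), replica):
        group = bricks[i:i + replica]
        sets.append({"data": group[:replica - arbiter],
                     "arbiter": group[replica - arbiter:]})
    return sets
```

So with six bricks and `replica 3 arbiter 1`, bricks 3 and 6 become the arbiters of their respective replica sets.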
2011 Feb 02
2
Gluster 3.1.2 and rpc-auth patch
Hi,
First of all, thanks for all the work you put into gluster; this product is fantastic.
In our setup, we have to have some kind of NFS authentication.
Not being able to set the rpc-auth option using the CLI was a big drawback for us.
Setting the auth.allow option only set the gluster auth.addr.allow option in the bricks themselves, but did not do any good regarding NFS access.
Setting the
2018 Mar 08
1
Fwd: Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.0 released
Forwarding to the devel and users groups as well.
We have tagged 4.0.0 branch as GA, and are in the process of building
packages.
It would be a good time to run final install/upgrade tests if you get a
chance on these packages (I am running off to do the same now).
Thanks,
Shyam
-------- Forwarded Message --------
Subject: Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.0
released
2010 Mar 04
1
[3.0.2] booster + unfsd failed
Hi list.
I have been testing with glusterfs-3.0.2.
glusterfs mount works well.
unfsd on glusterfs mount point works well too.
When using booster, the unfsd realpath check fails,
but the ls utility works well.
I tried a 3.0.0-git head source build, but the result was the same.
My System is Ubuntu 9.10 and using unfsd source from official gluster
download site.
Any comment appreciated!!
- kpkim
root at
2010 Apr 22
1
Transport endpoint not connected
Hey guys,
I've recently implemented gluster to share web content read-write between
two servers.
Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse : 2.7.2-1ubuntu2.1
Platform : ubuntu 8.04LTS
I used the following command to generate my configs:
/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export
2010 May 04
1
Glusterfs and Xen - mount option
Hi,
I've installed Gluster Storage Platform on two servers (mirror) and mounted
the volume on a client.
I had to mount it using "mount -t glusterfs
<MANAGEMENT_SERVER>:<VOLUMENAME-ON-WEBGUI>-tcp <MOUNTPOINT>" because I
didn't have a vol file and couldn't generate one with volgen; we don't
have shell access to the gluster server, right?
How can
2018 Jan 15
5
glusterfs development library
I want to write a python script and visual interface to manage glusterfs, such as creating and deleting volumes. Would this make glusterfs easier to manage?
But right now I execute the glusterfs command using python's subprocess.Popen function, such as subprocess.Popen(GLUSTER_CMD, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)......
But this does not feel like a good approach, because it has
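A common improvement over `shell=True` is to pass an argument list and parse gluster's machine-readable `--xml` output. A sketch under the assumption that the CLI's `--xml` flag is available; the XML layout shown in the test data is a simplified example, not verified live output:

```python
import xml.etree.ElementTree as ET

def gluster_argv(*args):
    # Build an argv list instead of a shell string, so volume names
    # cannot be interpreted (or injected) by the shell.
    return ["gluster", *args, "--xml"]

def parse_volume_names(xml_text):
    # Pull volume names out of `gluster volume info --xml`-style output.
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(".//volume/name")]
```

The argv form would then be run with something like `subprocess.run(gluster_argv("volume", "info"), capture_output=True, text=True)` and its stdout fed to `parse_volume_names`.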
2018 Jan 15
0
glusterfs development library
On Mon, Jan 15, 2018 at 2:53 AM, ?? <mrchenx at 126.com> wrote:
> I want to write a python script and visual interface to manage glusterfs,
> such as creating and deleting volumes. Would this make glusterfs easier to
> manage?
> But right now I execute the glusterfs command using python's
> subprocess.Popen function, such as subprocess.Popen(GLUSTER_CMD,
>
2018 Jan 11
3
IMP: Release 4.0: CentOS 6 packages will not be made available
Gluster Users,
This is to inform you that from the 4.0 release onward, packages for
CentOS 6 will not be built by the gluster community. This also means
that the CentOS SIG will not receive updates for 4.0 gluster packages.
Gluster release 3.12 and its predecessors will receive CentOS 6 updates
until Release 4.3 of gluster (which is slated for around Dec 2018).
The decision is due to the following:
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote:
> I thought you had removed vna as defective and then ADDED in vnh as
> the replacement?
>
> Why is vna still there?
Because I *can't* remove it. It died and could not be brought up. The
gluster peer detach command only works with live servers - a severe
problem IMHO.
--
Lindsay Mathieson
2017 Jul 05
2
[New Release] GlusterD2 v4.0dev-7
After nearly 3 months, we have another preview release for GlusterD-2.0.
The highlights of this release are:
- GD2 now uses an auto scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
- An end to end functional testing framework
2018 Jan 09
2
Bricks to sub-volume mapping
But do we store this information somewhere as part of gluster metadata or something...
Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173
From: Aravinda [mailto:avishwan at redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping
First 6 bricks
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson at gmail.com>
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died, was unable to be brought up. The
> gluster peer detach command
2017 Jul 05
0
[New Release] GlusterD2 v4.0dev-7
Il 5 lug 2017 11:31 AM, "Kaushal M" <kshlmster at gmail.com> ha scritto:
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
What do you mean by this?
Are there any differences in volume expansion from the current architecture?
2018 Mar 14
1
Announcing Gluster release 4.0.0 (Short Term Maintenance)
The Gluster community celebrates 13 years of development with this
latest release, Gluster 4.0. This release enables improved integration
with containers, an enhanced user experience, and a next-generation
management framework. The 4.0 release helps cloud-native app developers
choose Gluster as the default scale-out distributed file system.
We're highlighting some of the announcements, major
2018 Jan 18
0
IMP: Release 4.0: CentOS 6 packages will not be made available
On 11/01/2018 18:32, Shyam Ranganathan wrote:
> Gluster Users,
>
> This is to inform you that from the 4.0 release onward, packages for
> CentOS 6 will not be built by the gluster community. This also means
> that the CentOS SIG will not receive updates for 4.0 gluster packages.
>
> Gluster release 3.12 and its predecessors will receive CentOS 6 updates
> till Release 4.3
2018 Jan 09
0
Bricks to sub-volume mapping
No, we don't store that information separately, but it can easily be
predicted from the Volume Info.
For example, in the Volume Info below, "Number of Bricks" is shown in
the following format:
Number of Subvols x (Number of Data bricks + Number of Redundancy bricks) = Total Bricks
Note: Sub-volumes are predictable without storing them as separate info
since we do not have
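That format can be read mechanically; a small sketch that splits a brick list into sub-volumes using the same arithmetic (an illustration, not gluster code):

```python
def bricks_to_subvols(bricks, data, redundancy):
    # Number of Subvols x (Data bricks + Redundancy bricks) = Total Bricks,
    # so each consecutive run of (data + redundancy) bricks is one sub-volume.
    size = data + redundancy
    if len(bricks) % size:
        raise ValueError("total bricks must be a multiple of data + redundancy")
    return [bricks[i:i + size] for i in range(0, len(bricks), size)]
```

For a "2 x (2 + 1) = 6" volume, the first three bricks listed form the first sub-volume and the next three form the second.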