Displaying 20 results from an estimated 30000 matches similar to: "Exorbitant cost to achieve redundancy??"
2009 Jul 18
1
GlusterFS & XenServer Baremetal
Hello,
What is, in your view, the best GlusterFS scenario when using XenServer (I'm
not talking about Xen on Linux, but bare-metal XenServer) for a web farm
(Apache/Tomcat)? I was thinking of using ZFS as the filesystem for the
different nodes.
The objectives/needs :
* A storage cluster with capacity at least equal to one node (assuming all
nodes are the same).
* being able to lose/take down any
2018 Mar 22
2
[ovirt-users] GlusterFS performance with only one drive per host?
On Mon, Mar 19, 2018 at 5:57 PM, Jayme <jaymef at gmail.com> wrote:
> I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm
> considering storage options. I don't have a requirement for high amounts
> of storage, I have a little over 1TB to store but want some overhead so I'm
> thinking 2TB of usable space would be sufficient.
2010 Nov 27
1
GlusterFS replica question
Hi,
For a small lab environment I want to use GlusterFS with only ONE node.
After some time I would like to add a second node as the redundant
node (replica).
Is that possible in GlusterFS 3.1 without downtime?
Cheers
PK
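Growing from one brick to a replica pair is done with add-brick; a hedged sketch, assuming hypothetical host and volume names (recent Gluster versions can do this online):

```shell
# Start with a single-brick volume on node1.
gluster volume create myvol node1:/bricks/brick1
gluster volume start myvol

# Later, raise the replica count to 2 while adding the second node's brick.
gluster volume add-brick myvol replica 2 node2:/bricks/brick1

# Kick off and verify self-heal so existing data is copied to the new replica.
gluster volume heal myvol full
gluster volume heal myvol info
```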
2017 Sep 20
3
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x.
Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
2012 Dec 27
8
how well will this work
Hi Folks,
I find myself trying to expand a 2-node high-availability cluster to a
4-node cluster. I'm running Xen virtualization, and currently
using DRBD to mirror data, and pacemaker to failover cleanly.
The thing is, I'm trying to add 2 nodes to the cluster, and DRBD doesn't
scale. Also, as a function of rackspace limits, and the hardware at
hand, I can't separate
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody.
I have a problem setting up gluster failover functionality. Based on
the manual I set up ucarp, which is working well (tested with ping/ssh
etc.).
But when I use the virtual address for the gluster volume mount and I turn
off one of the nodes, the machine/gluster will freeze until the node is back online.
My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In the
gluster log I can see:
[2011-06-06
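For reference, a minimal ucarp invocation matching the setup described; the full IPs, interface, vhid, password, and script paths below are placeholders (the post only gives the last octets):

```shell
# On the node with real IP 192.168.3.233: advertise virtual IP 192.168.3.200.
ucarp --interface=eth0 --srcip=192.168.3.233 --vhid=1 --pass=secret \
      --addr=192.168.3.200 \
      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh &

# The up/down scripts add and remove the VIP, e.g. vip-up.sh contains:
#   ip addr add 192.168.3.200/24 dev eth0
```

Note that a VIP only covers the initial volfile fetch; once mounted, the native client talks to all bricks directly, so a hang usually points at client-side timeouts rather than the VIP itself.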
2017 Aug 02
2
Quotas not working after adding arbiter brick to replica 2
Mabi,
We have fixed a couple of issues in the quota list path.
Could you also please attach the quota.conf file (/var/lib/glusterd/vols/
patchy/quota.conf)
(Ideally, the first few bytes will be ASCII characters, followed by 17
bytes per directory on which a quota limit is set.)
Regards,
Sanoj
On Tue, Aug 1, 2017 at 1:36 PM, mabi <mabi at protonmail.ch> wrote:
> I also just noticed quite
2017 Aug 01
2
Quotas not working after adding arbiter brick to replica 2
Hello,
As you might have read in my previous post on the mailing list, I have added an arbiter node to my GlusterFS 3.8.11 replica 2 volume. After some healing issues, which were fixed with Ravi's help, I have now noticed that my quotas are all gone.
When I run the following command:
gluster volume quota myvolume list
There is no output...
In the /var/log/glusterfs/quotad.log I can see the
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through.
---------- Forwarded message ----------
From: Martin Toth <snowmailer at gmail.com>
Date: Thu, Sep 21, 2017 at 9:17 AM
Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
To: gluster-users at gluster.org
Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com
Hello all fellow GlusterFriends,
I would like you to comment /
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Procedure looks good.
Remember to back up Gluster config files before update:
/etc/glusterfs
/var/lib/glusterd
If you are *not* on the latest 3.7.x, you are unlikely to be able to go
back to it, because the PPA only keeps the latest version of each major
branch; keep that in mind. With Ubuntu, every time you update, make sure to
download and keep a manual copy of the .deb files. Otherwise you
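A quick sketch of such a backup, using the two config paths named above (the destination path is an example):

```shell
# Archive both Gluster config locations before upgrading.
STAMP=$(date +%Y%m%d)
tar czf /root/gluster-config-backup-$STAMP.tar.gz \
    /etc/glusterfs /var/lib/glusterd

# On Ubuntu, also keep the currently installed packages for rollback:
apt-get download glusterfs-server glusterfs-client glusterfs-common
```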
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi,
The existing syntax in the gluster CLI for creating arbiter volumes is
`gluster volume create <volname> replica 3 arbiter 1 <list of bricks>` .
It means (or is at least intended to mean) that out of the 3 bricks, 1
brick is the arbiter.
There has been some feedback while implementing arbiter support in
glusterd2 for glusterfs-4.0 that we should change this to `replica 2
arbiter
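The two syntaxes under discussion, side by side; volume, host, and brick names are placeholders, and the second form is the glusterd2 proposal the post describes:

```shell
# Existing syntax: 3 bricks total, and the 3rd brick is the arbiter.
gluster volume create myvol replica 3 arbiter 1 \
    host1:/b/brick host2:/b/brick host3:/b/arbiter

# Proposed syntax: 2 data replicas plus 1 arbiter, same 3 bricks.
gluster volume create myvol replica 2 arbiter 1 \
    host1:/b/brick host2:/b/brick host3:/b/arbiter
```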
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi...
Started playing with gluster, and the heal function is my "target" for
testing.
Short description of my test
----------------------------
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory: date >data.txt
* Append to file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt
=>
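The test above can be reproduced as a short command sequence; the brick and mount paths are hypothetical:

```shell
# Create the file through the Gluster mount point.
cd /mnt/gluster && date > data.txt

# Simulate divergence: append directly on one brick, bypassing Gluster.
hostname >> /bricks/brick1/data.txt

# Trigger self-heal by stat-ing the file through the mount.
stat /mnt/gluster/data.txt

# Then compare the file's size/content on each brick to see if it healed.
```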
2017 Aug 03
2
Quotas not working after adding arbiter brick to replica 2
I tried to manually re-create my quotas, but not even that works now. Running the "limit-usage" command as shown below returns success:
$ sudo gluster volume quota myvolume limit-usage /userdirectory 50GB
volume quota : success
but when I list the quotas using "list" nothing appears.
What can I do to fix that issue with the quotas?
> -------- Original Message --------
>
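For reference, the sequence the poster expects to work, with the volume and directory names taken from the post (on a healthy volume, `list` prints one row per limited directory):

```shell
# Set a 50GB quota on a directory, then list all quotas on the volume.
gluster volume quota myvolume limit-usage /userdirectory 50GB
gluster volume quota myvolume list

# Expected shape of the listing (columns abbreviated):
#   Path            Hard-limit  Soft-limit  Used  Available
#   /userdirectory  50.0GB      80%         ...   ...
```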
2015 May 16
4
fault tolerance
Hi people,
I am now using gluster version 3.6.2 and I want to configure the system for fault tolerance. The point is that I want two servers in replication mode, so that if one server goes down the client does not notice the fault. How do I need to mount the volume on the client for this purpose?
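One common way to get this on the client side is to mount the volume with a backup volfile server, so the mount still succeeds if the first server is down; host and volume names below are placeholders:

```shell
# Native FUSE mount with a fallback server for fetching the volfile.
mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/gluster

# Equivalent /etc/fstab entry:
# server1:/myvol  /mnt/gluster  glusterfs  defaults,backupvolfile-server=server2  0 0
```

Once mounted, the replica client talks to both bricks directly, so the failure of a single server is transparent for reads and writes.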
2018 Apr 25
3
Problem adding replicated bricks on FreeBSD
Hi Folks,
I'm trying to debug an issue that I've found while attempting to qualify
GlusterFS for potential distributed storage projects on the FreeBSD-11.1
server platform - using the existing package of GlusterFS v3.11.1_4
The main issue I've encountered is that I cannot add new bricks while
setting/increasing the replica count.
If I create a replicated volume "poc"
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x.
Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
2018 Feb 25
3
Convert replica 2 to replica 2+1 arbiter
I must ask again, just to be sure. Is what you are proposing definitely
supported in v3.8?
Kind regards,
Mitja
On 25/02/2018 13:55, Jim Kinney wrote:
> gluster volume add-brick volname replica 3 arbiter 1
> brickhost:brickpath/to/new/arbitervol
>
> Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a
> change in command will happen so it won't count the
2011 Apr 22
1
rebalancing after remove-brick
Hello,
I'm having trouble migrating data from one removed replica set to
another active one in a distributed-replicated volume.
My test scenario is the following:
- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance - ran on one brick in each set)
The doc seems to imply that it is possible to remove
2010 Nov 11
1
Possible split-brain
Hi all,
I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client:
[root@localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
2018 Apr 26
2
cluster of 3 nodes and san
Hi list, I need a little help. I currently have a VMware cluster with
3 nodes and a storage array (Dell PowerVault) connected by FC with
redundancy, and I'm thinking of migrating it to Proxmox since the
maintenance costs are very expensive. My doubt is: can I use
glusterfs with a SAN connected by FC? Is it advisable? One more
detail: at another site I have another cluster