Displaying 20 results from an estimated 1500 matches similar to: "Glusterfs performance with large directories"

2018 Feb 27 · 2 · Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> I will try to explain how you can end up in split-brain even with cluster
> wide quorum:
Yep, the explanation made sense. I hadn't considered the possibility of
alternating outages. Thanks!
> > > It would be great if you can consider configuring an arbiter or
> > > replica 3 volume.
> >
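
For reference, the arbiter suggestion above usually amounts to a single
add-brick step. A rough sketch, with a hypothetical volume name, host and
brick path (a distributed volume needs one arbiter brick per replica pair,
all listed in the same command):

    # Hypothetical example: convert a replica 2 volume to replica 3 arbiter 1.
    gluster volume add-brick myvol replica 3 arbiter 1 arbiterhost:/bricks/myvol/arb0
    gluster volume heal myvol full    # let self-heal populate the new arbiter brick
    gluster volume info myvol         # replica sets should now be reported as "(2 + 1)"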

2011 Oct 17 · 1 · brick out of space, unmounted brick
Hello Gluster users,
Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change these behaviors. My experiences are with glusterfs 3.2.4 on CentOS 6 64-bit.
Suppose I have a Gluster volume made up of four 1 MB bricks, like this:
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
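
For what it's worth, the closest existing knob for this is the distribute
translator's minimum free-disk setting; a hedged example with a placeholder
threshold (it only steers where new files are created, it does not stop
writes to files already sitting on a nearly full brick):

    gluster volume set test cluster.min-free-disk 10%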

2018 Feb 15 · 0 · Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional
digging myself:
azathoth replicates with yog-sothoth, so I compared their brick
directories. `ls -R /var/local/brick0/data | md5sum` gives the same
result on both servers, so the filenames are identical in both bricks.
However, `du -s /var/local/brick0/data` shows that azathoth has about 3G
more data (445G vs 442G) than

2018 Feb 13 · 2 · Failover problems with gluster 3.8.8-1 (latest Debian stable)
I'm using gluster for a virt-store with 3x2 distributed/replicated
servers for 16 qemu/kvm/libvirt virtual machines using image files
stored in gluster and accessed via libgfapi. Eight of these disk images
are standalone, while the other eight are qcow2 images which all share a
single backing file.
For the most part, this is all working very well. However, one of the
gluster servers

2018 Feb 25 · 2 · Re-adding an existing brick to a volume
Hi!
I am running a replica 3 volume. On server2 I wanted to move the brick
to a new disk.
I removed the brick from the volume:
gluster volume remove-brick VOLUME rep 2
server2:/gluster/VOLUME/brick0/brick force
I unmounted the old brick and mounted the new disk to the same location.
I added the empty new brick to the volume:
gluster volume add-brick VOLUME rep 3
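
Since the new disk is mounted at the same path, another option (on gluster
3.9 or later) is the reset-brick sequence; a hedged sketch that reuses the
brick path from the commands above:

    # Sketch only; assumes the same brick path on the new disk.
    gluster volume reset-brick VOLUME server2:/gluster/VOLUME/brick0/brick start
    # ... swap the disk, mount it at the same location ...
    gluster volume reset-brick VOLUME server2:/gluster/VOLUME/brick0/brick \
        server2:/gluster/VOLUME/brick0/brick commit force
    gluster volume heal VOLUME full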

2009 Feb 23 · 1 · Interleave or not
Let's say you had 4 servers and you wanted to set up replicate and
distribute. Which method would be better:
server sdb1
xen0 brick0
xen1 mirror0
xen2 brick1
xen3 mirror1
replicate block0 - brick0 mirror0
replicate block1 - brick1 mirror1
distribute unify - block0 block1
or
server sdb1 sdb2
xen0 brick0 mirror3
xen1 brick1 mirror0
xen2 brick2 mirror1
xen3 brick3 mirror2
replicate block0 -
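
(This 2009 thread predates the gluster CLI, so the layouts above are
volfile-style. In current CLI terms the interleaving question comes down to
brick ordering, since consecutive bricks form a replica pair. A hypothetical
sketch of the first layout, with placeholder brick paths:)

    # Pairs xen0 with xen1 and xen2 with xen3, then distributes across the two pairs.
    gluster volume create vol0 replica 2 \
        xen0:/export/sdb1/brick xen1:/export/sdb1/brick \
        xen2:/export/sdb1/brick xen3:/export/sdb1/brick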

2018 Feb 25 · 0 · Re-adding an existing brick to a volume
The .glusterfs directory and the attrs are already in that folder, so it would not connect as a brick.
I don't think there is an option to "reconnect a brick back".
What I did many times: delete .glusterfs and reset the attrs on the folder,
connect the brick, and then update those attrs with stat.
There is an example of the commands here (a rough sketch also follows below):
http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
Vlad
On Sun, Feb 25, 2018
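
A rough sketch of the procedure described above, reusing the brick path from
the question; these are the commonly posted commands for reusing an old brick
directory, not a verbatim copy of the linked post:

    # Run on the server holding the old data directory, while it is not part of the volume.
    setfattr -x trusted.glusterfs.volume-id /gluster/VOLUME/brick0/brick
    setfattr -x trusted.gfid /gluster/VOLUME/brick0/brick
    rm -rf /gluster/VOLUME/brick0/brick/.glusterfs
    # Re-add the brick and let self-heal rebuild the metadata:
    gluster volume add-brick VOLUME replica 3 server2:/gluster/VOLUME/brick0/brick
    gluster volume heal VOLUME full

The "update those attrs with stat" step appears to mean stat'ing files through
a client mount so that self-heal recreates their extended attributes.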

2017 Aug 09 · 1 · Gluster performance with VM's
Hi, community
Please help me with my problem.
I have 2 Gluster nodes, with 2 bricks on each.
Configuration:
Node1 brick1 replicated on Node0 brick0
Node0 brick1 replicated on Node1 brick0
Volume Name: gm0
Type: Distributed-Replicate
Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:

2018 Feb 25 · 1 · Re-adding an existing brick to a volume
Let me see if I understand this.
Remove attrs from the brick and delete the .glusterfs folder. Data stays
in place. Add the brick to the volume.
Since most of the data is the same as on the actual volume it does not
need to be synced, and the heal operation finishes much faster.
Do I have this right?
Kind regards,
Mitja
On 25/02/2018 17:02, Vlad Kopylov wrote:
> .gluster and attr already in
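
The heal can also be watched after the brick is re-added; a short sketch with
the volume name as a placeholder:

    gluster volume heal VOLUME info                    # entries still pending heal, per brick
    gluster volume heal VOLUME info split-brain        # should stay empty
    gluster volume heal VOLUME statistics heal-count   # quick per-brick counters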

2017 Jul 17 · 1 · Gluster set brick online and start sync.
Hello everybody,
Please help me fix a problem.
I have a distributed-replicated volume between two servers. On each
server I have 2 RAID-10 arrays that are replicated between the servers.
Brick gl1:/mnt/brick1/gm0    49153    0      Y    13910
Brick gl0:/mnt/brick0/gm0    N/A      N/A    N    N/A
Brick gl0:/mnt/brick1/gm0    N/A
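
When bricks show N/A / offline like this, the usual first step is to restart
the brick processes and then trigger a heal; a hedged sketch using the volume
name from the post:

    gluster volume start gm0 force    # "force" restarts brick processes that are down
    gluster volume status gm0         # the gl0 bricks should now show a port and Online=Y
    gluster volume heal gm0 full
    gluster volume heal gm0 info

If a brick still refuses to start, its log under /var/log/glusterfs/bricks/
usually says why.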

2009 May 19 · 1 · nufa and missing files
We're using gluster-2.0.1 and a nufa volume comprised of thirteen
subvolumes across thirteen hosts.
We've found today that there are some files in the local filesystem
associated with the subvolume from one of the hosts that are not
being seen in the nufa volume on any gluster client.
I don't know how or when this happened, but now we have to do some
work to get this gluster volume

2012 Nov 27 · 1 · Performance after failover
Hey, all.
I'm currently trying out GlusterFS 3.3.
I've got two servers and four clients, all on separate boxes.
I've got a Distributed-Replicated volume with 4 bricks, two from each
server,
and I'm using the FUSE client.
I was trying out failover, currently testing for reads.
I was reading a big file, using iftop to see which server was actually
being read from.
I put up an

2009 Jul 29 · 2 · Xen - Backend or Frontend or Both?
I have 6 boxes running the client config below. I am using
distribute across 3 replicate pairs. Since I am running Xen I need to
disable direct I/O, and that slows things down quite a bit. My thought was
to move the replicate / distribute setup to the backend server config so that
self-heal can happen on the faster backend rather than on the frontend client
with direct I/O disabled.
Does this
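
For context, the direct I/O setting referred to above is a client-side option;
hypothetical examples of both forms (volfile path, volume name and mount point
are placeholders, and the exact spelling varies between releases):

    # Old-style FUSE client invocation:
    glusterfs --disable-direct-io-mode -f /etc/glusterfs/client.vol /mnt/gluster
    # Mount-option form in later releases:
    mount -t glusterfs -o direct-io-mode=disable server1:/vol0 /mnt/gluster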

2018 Feb 26 · 2 · Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
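
The options being quoted are set per volume; a short sketch with a placeholder
volume name:

    gluster volume set myvol cluster.quorum-type auto           # client-side quorum
    gluster volume set myvol cluster.server-quorum-type server  # server-side quorum
    gluster volume get myvol cluster.quorum-type                # check the current value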

2018 Feb 27 · 0 · Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,

2003 Dec 02 · 8 · Vector Assignments
Hi,
I have a simple R question.
I have a vector x that contains real numbers. I would like to create
another vector col that is the same length as x such that:
if x[i] < 250 then col[i] = "red"
else if x[i] < 500 then col[i] = "blue"
else if x[i] < 750 then col[i] = "green"
else col[i] = "black" for all i
I am convinced that there is probably a

2018 Feb 27 · 0 · Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense. I hadn't considered the possibility of
> alternating outages. Thanks!
>
>

2014 Jun 27 · 1 · geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'.
As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path.
I have followed the configuration steps as documented in
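
For what it's worth, the fix usually posted for '/nonexistent/gsyncd' is to
point the session at the slave's real gsyncd path; a hedged sketch in which
the volume names, slave host and install path are all assumptions (the config
key is spelled remote_gsyncd on some versions):

    gluster volume geo-replication mastervol slavehost::slavevol config remote-gsyncd \
        /usr/libexec/glusterfs/gsyncd
    gluster volume geo-replication mastervol slavehost::slavevol status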

2013 Mar 20 · 2 · Geo-replication broken in 3.4 alpha2?
Dear all,
I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following:
All machines are running CentOS 6.4 and using

2018 Feb 15 · 2 · Failover problems with gluster 3.8.8-1 (latest Debian stable)
Hi,
Have you checked for any file system errors on the brick mount point?
I was once facing weird I/O errors and xfs_repair fixed the issue.
What about the heal? Does it report any pending heals? (Both checks are
sketched as commands at the end of this message.)
On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote:
> Well, it looks like I've stumped the list, so I did a bit of additional
> digging myself:
>
>
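
The two checks suggested above, written out as hedged examples (the device
and volume names are placeholders; the brick mount point is taken from the
earlier messages in this thread):

    gluster volume heal VOLNAME info              # anything pending heal?
    gluster volume heal VOLNAME info split-brain
    # Filesystem check on the brick device; read-only first, with the brick unmounted:
    umount /var/local/brick0
    xfs_repair -n /dev/sdX1                       # -n = report only, modify nothing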