Displaying 20 results from an estimated 30000 matches similar to: "Usage Case: just not getting the performance I was hoping for"
2012 Feb 23
1
Default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128KB); performance change if I reduce to a smaller block size?
Hi,
I've been migrating data from an old striped 3.0.x gluster install to
a 3.3 beta install. I copied all the data to a regular XFS partition
(4K block size) from the old gluster striped volume, and it totaled
9.2TB. With the old setup I used the following option in a "volume
stripe" block in the configuration file on a client:
volume stripe
type cluster/stripe
option
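For context, the same stripe unit is expressed differently across the two generations; a hedged sketch (volume and subvolume names are placeholders, and the value shown is simply the 128KB default under discussion):
  # 3.0.x-style client volfile: "block-size" is the stripe unit
  volume stripe
    type cluster/stripe
    option block-size 128KB
    subvolumes client1 client2
  end-volume
  # 3.3-style equivalent via the CLI
  gluster volume set myvol cluster.stripe-block-size 128KB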
2012 Jun 14
4
RAID options for Gluster
I think this discussion probably came up here already, but I couldn't find much in the archives. Would you be able to comment on or correct whatever might look wrong?
Which options do people think are most adequate to use with Gluster in terms of the RAID underneath, with a good balance between cost, usable space, and performance? I have thought about two main options, each with its pros and cons:
No RAID (individual
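The first option (no RAID, one brick per disk) might look like this hypothetical sketch, with Gluster's own replication providing the redundancy (hostnames and paths invented):
  # one XFS filesystem per disk, each exported as its own brick
  mkfs.xfs /dev/sdb1 && mount /dev/sdb1 /bricks/disk1
  # replica pairs span servers, so one failed disk costs only its pair
  gluster volume create datavol replica 2 \
    server1:/bricks/disk1 server2:/bricks/disk1 \
    server1:/bricks/disk2 server2:/bricks/disk2
The trade-off is more bricks to manage and self-heal traffic on every disk failure, against no RAID write penalty and no usable space lost to parity.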
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi,
I've got this strange problem where a striped endpoint will crash when
I try to use cp to copy files off of it but not when I use rsync to
copy files off:
[user@gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/
cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py':
Software caused connection abort
cp: closing
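For comparison, the rsync invocation that reportedly succeeds would be something along these lines (same source tree as the failing cp above):
  rsync -a Python-2.6.4/ ~/tmp/Python-2.6.4/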
2012 Feb 07
1
Recommendations for busy static web server replacement
Hi all
after being a silent reader for some time, and not being very successful at
getting good performance out of our test set-up, I'm finally coming to the
list with questions.
Right now, we are operating a web server serving out 4MB files for a
distributed computing project. Data is requested from all over the world at a
rate of about 650k to 800k downloads a day. Each data file is usually
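A back-of-envelope check on those figures (assuming the load is spread evenly over the day):
  800,000 downloads/day x 4 MB = ~3.2 TB/day
  3.2 TB / 86,400 s            = ~37 MB/s sustained
  37 MB/s x 8                  = ~300 Mbit/s average outbound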
2012 Nov 06
2
I am very confused about stripe: in what way does it hold space?
I have 4 Dell 2970 servers; three of them have 146GB x 6 hard disks, and one has 72GB x 6.
Each server's mount info is:
/dev/sda4 on /exp1 type xfs (rw)
/dev/sdb1 on /exp2 type xfs (rw)
/dev/sdc1 on /exp3 type xfs (rw)
/dev/sdd1 on /exp4 type xfs (rw)
/dev/sde1 on /exp5 type xfs (rw)
/dev/sdf1 on /exp6 type xfs (rw)
I created a gluster volume with stripe 4:
gluster volume create test-volume3 stripe 4
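A complete form of that command would look roughly like the following (which four bricks go into the stripe set is hypothetical; the original command was cut off):
  gluster volume create test-volume3 stripe 4 transport tcp \
    server1:/exp1 server2:/exp1 server3:/exp1 server4:/exp1
On space: a striped file is chopped into stripe-block-size chunks laid round-robin across all four bricks, so every file consumes space on every brick in the set, and the set's effective capacity is roughly bounded by its smallest brick (the 72GB disks here).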
2007 Nov 23
2
How to remove OST permanently?
All,
I've added a new 2.2 TB OST to my cluster easily enough, but this new
disk array is meant to replace several smaller OSTs I used to have,
which were only 120 GB, 500 GB, and 700 GB.
Adding an OST is easy, but how do I REMOVE the small OSTs that I no
longer want to be part of my cluster? Is there a command to tell Lustre
to move all the file stripes off one of the nodes?
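There was no single "remove OST" command in Lustre of that vintage; the usual recipe, sketched here with placeholder device numbers and UUIDs, is to deactivate the OST so no new objects land on it and then rewrite the files that have stripes there:
  # on the MDS: find the OSC device for the small OST, then deactivate it
  lctl dl
  lctl --device <devno> deactivate
  # on a client: locate files with objects on that OST, then copy them
  # so their stripes are re-created on the remaining active OSTs
  lfs find --obd <OST-UUID> /mnt/lustre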
2017 Jun 30
2
Very slow performance on Sharded GlusterFS
Hi,
I have 2 nodes with 20 bricks in total (10+10).
First test:
2 Nodes with Distributed - Striped - Replicated (2 x 2)
10GbE Speed between nodes
"dd" performance: 400MB/s and higher
Downloading a large file from the internet directly onto the gluster
volume: 250-300MB/s
Now the same test without stripe but with sharding. The results are the same
whether I set the shard size to 4MB or
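For context, the kind of dd test described is typically run like this (file name and size are arbitrary; oflag=direct bypasses the page cache so the number reflects the volume rather than RAM):
  dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=10240 oflag=direct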
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
Hi Krutika,
Sure, here is volume info:
root@sr-09-loc-50-14-18:/# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 30426017-59d5-4091-b6bc-279a905b704a
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: sr-09-loc-50-14-18:/bricks/brick1
Brick2: sr-09-loc-50-14-18:/bricks/brick2
Brick3:
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them.
Would the standard/recommended approach be to make each drive its own
filesystem, and export 24 separate bricks, server1:/data1 ..
server1:/data24 ? Making a distributed replicated volume between this and
another server would then have to list all 48 drives individually.
At the other extreme, I could put all 24 drives into some
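Listing all 48 bricks is tedious but easily scripted; a hypothetical sketch of the one-brick-per-drive extreme (hostnames and paths invented):
  # replica pairs interleaved across the two servers, one brick per drive
  gluster volume create bigvol replica 2 $(
    for i in $(seq 1 24); do
      echo "server1:/data$i server2:/data$i"
    done)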
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi,
Which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4, etc.?
Thanks & Regards,
Bobby Jacob
Senior Technical Systems Engineer | eGroup
SAVE TREES. Please don't print this e-mail unless you really need to.
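XFS is what the Gluster documentation of that period recommends for bricks; a hedged example of preparing one (device name is an example, and the 512-byte inode size is the commonly advised value, leaving room for Gluster's extended attributes):
  mkfs.xfs -i size=512 /dev/sdb1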
2012 Oct 23
1
Problems with striped-replicated volumes on 3.3.1
Good afternoon,
I am playing around with GlusterFS 3.3.1 in CentOS 6 virtual machines to see if I can get a proof of concept for a bigger project. In my setup, I have 4 GlusterFS servers with two 10GB XFS bricks each (per your quick-start guide). So, I have a total of 8 bricks. When bu
I have no problem with distributed-replicated volumes. However, when I set up a striped replicated
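For reference, a hypothetical striped-replicated create on a layout like that (4 servers x 2 bricks; names invented). The brick list is consumed in order, so adjacent bricks become replicas of each other and should sit on different servers:
  gluster volume create stripevol stripe 2 replica 2 transport tcp \
    gluster1:/brick1 gluster2:/brick1 gluster3:/brick1 gluster4:/brick1 \
    gluster1:/brick2 gluster2:/brick2 gluster3:/brick2 gluster4:/brick2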
2017 Jun 30
0
Very slow performance on Sharded GlusterFS
Could you please provide the volume-info output?
-Krutika
On Fri, Jun 30, 2017 at 4:23 PM, <gencer at gencgiyen.com> wrote:
> Hi,
>
> I have 2 nodes with 20 bricks in total (10+10).
>
> First test:
>
> 2 Nodes with Distributed - Striped - Replicated (2 x 2)
>
> 10GbE Speed between nodes
>
> "dd" performance:
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, have you checked inode usage (df -i /lvbackups/brick)?
Best Regards, Strahil Nikolov
On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote:
Hi Gluster users,
We are seeing an 'error=No space left on device' issue and hoping someone
might be able to advise.
We have been using a 12-node glusterfs v10.4 distributed vsftpd backup
cluster for years (not new), and recently, 2 weeks ago
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users,
We are seeing an 'error=No space left on device' issue and hoping someone
might be able to advise.
We have been using a 12-node glusterfs distributed vsftpd backup cluster for
years (not new), and 2 weeks ago upgraded from v9 to v10.4. I do not
know if the upgrade is related to this new issue.
We are seeing a new 'error=No space left on device' error
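A quick way to confirm or rule out inode exhaustion across the nodes (brick path as in the reply above; the volume name is a placeholder):
  # per-brick inode usage on each node
  df -i /lvbackups/brick
  # or, from any one node, per-brick inode counts for the whole volume
  gluster volume status backupvol detail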
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Hi,
I have 2 nodes with 20 bricks in total (10+10).
First test:
2 Nodes with Distributed - Striped - Replicated (2 x 2)
10GbE Speed between nodes
"dd" performance: 400MB/s and higher
Downloading a large file from the internet directly onto the gluster
volume: 250-300MB/s
Now the same test without stripe but with sharding. The results are the same
whether I set the shard size to 4MB or
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume;
Proxmox is attached and VMs are created, but after some time, I think
after a lot of I/O inside a VM, the data inside the virtual machine
gets corrupted. When I copy files from or to our glusterfs
directly, everything is OK; I've
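A common first step for VM-image workloads is Gluster's bundled 'virt' settings group, which applies the options intended for qcow2-style images (volume name is a placeholder; check that the group file ships with your version):
  gluster volume set vmvol group virt
This typically turns off the client-side performance translators that are known to cause trouble for live VM images and, in recent versions, enables sharding.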
2014 Jan 21
2
XFS : Taking the plunge
Hi All,
I have been trying out XFS, given it is going to be the filesystem of
choice upstream in el7. I am starting with an Adaptec ASR71605 populated
with sixteen 4TB WD enterprise hard drives. The OS is 6.4
x86_64 with 64GB of RAM.
This next part was not well researched, as I had a colleague bothering me
late on Xmas Eve because he needed 14 TB immediately to move data to from an
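On a wide hardware-RAID set like that, the usual advice is to align XFS to the RAID geometry; a hedged example for this sixteen-drive array configured as RAID6 with a 256KB stripe unit (both values are assumptions, not from the post):
  # su = controller stripe unit, sw = data drives (16 minus 2 parity)
  mkfs.xfs -d su=256k,sw=14 /dev/sda1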
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Hi Gencer,
I just checked the volume-profile attachments.
Things that seem really odd to me as far as the sharded volume is concerned:
1. Only the replica pair having bricks 5 and 6 on both nodes 09 and 10
seems to have witnessed all the IO. No other bricks witnessed any write
operations. This is unacceptable for a volume that has 8 other replica
sets. Why didn't the shards get distributed
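One way to see where the shards actually landed: they are stored under the hidden .shard directory at the top of each brick (brick paths as in the volume info earlier in the thread):
  # run on each node: count shard files per brick
  for b in /bricks/brick*; do echo -n "$b: "; ls $b/.shard 2>/dev/null | wc -l; done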
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Just noticed that the way you have configured your brick order during
volume-create makes both replicas of every set reside on the same machine.
That apart, do you see any difference if you change shard-block-size to
512MB? Could you try that?
If it doesn't help, could you share the volume-profile output for both the
tests (separate)?
Here's what you do:
1. Start profile before starting
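The profiling steps being introduced here are presumably along these lines (the exact steps were cut off in the snippet; volume name taken from the thread):
  gluster volume profile testvol start
  # ... run the dd test ...
  gluster volume profile testvol info > profile-output.txt
  gluster volume profile testvol stop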
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi Krutika,
Have you been able to look at my profiles? Do you have any clue, idea, or suggestion?
Thanks,
-Gencer
From: Krutika Dhananjay [mailto:kdhananj at redhat.com]
Sent: Friday, June 30, 2017 3:50 PM
To: gencer at gencgiyen.com
Cc: gluster-user <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Very slow performance on Sharded GlusterFS
Just noticed that the