similar to: GlusterFS performance

Displaying 20 results from an estimated 20000 matches similar to: "GlusterFS performance"

2012 Feb 26
1
"Structure needs cleaning" error
Hi, We have recently upgraded our gluster to 3.2.5 and have encountered the following error. Gluster seems somehow confused about one of the files it should be serving up, specifically /projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png If I go to that directory and simply do an ls *.png I get ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning (along with a listing
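"Structure needs cleaning" is the kernel's EUCLEAN error and usually points at corruption on the brick's underlying filesystem rather than in Gluster itself. A hypothetical recovery sketch for the affected brick; the device and mount path are assumptions, and the brick must be taken offline first:

  # stop the gluster server processes on this node first (glusterfsd included)
  /etc/init.d/glusterd stop; killall glusterfsd
  umount /bricks/brick1
  xfs_repair /dev/sdb1            # for an XFS backend
  # e2fsck -f /dev/sdb1           # ...or this instead for an ext4 backend
  mount /bricks/brick1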
2011 May 06
2
Best practice to stop the Gluster CLIENT process?
Hi all! What's the best way to stop the CLIENT process for Gluster? We have dual systems, where the Gluster servers also act as clients, so both glusterd and glusterfsd are running on the system. Stopping the server application works via "/etc/init.d/glusterd stop", but how is the client stopped? I need to unmount the filesystem from the server in order to do an fsck on the ext4 volume;
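For the FUSE client there is no init script to stop: each mount is served by its own glusterfs process, which exits when its mount point is unmounted. A minimal sketch, assuming a hypothetical mount point of /mnt/gluster:

  umount /mnt/gluster        # this also stops the glusterfs client process for the mount
  fuser -vm /mnt/gluster     # if the umount fails with "device is busy", list what holds it
  ps ax | grep glusterfs     # verify no client process is left behind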
2011 Nov 05
1
glusterfs over rdma ... not.
OK - finished some tests over tcp and ironed out a lot of problems. rdma is next; should be a snap now.... [I must admit that this is my 1st foray into the land of IB, so some of the following may be obvious to a non-naive admin..] except that while I can create and start the volume with rdma as transport: ================================== root@pbs3:~ 622 $ gluster volume info glrdma
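For reference, an RDMA-capable volume is normally created with the transport named at create time, and the client can request the rdma transport as a mount option; the brick paths and mount point below are hypothetical:

  gluster volume create glrdma transport rdma pbs3:/bricks/b1 pbs4:/bricks/b1
  gluster volume start glrdma
  mount -t glusterfs -o transport=rdma pbs3:/glrdma /mnt/glrdma   # client side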
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all, I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6T. We would like to expand the storage capacity much more, but first we need to figure this problem out. Soon after loading up about 100 MB of small files (about 300 KB each), the
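Slow ls over a Gluster mount is usually dominated by the per-entry stat and replica self-heal checks rather than by the readdir itself. A quick baseline measurement plus one commonly suggested tunable, with a hypothetical volume name:

  time ls -l /mnt/myvol/somedir > /dev/null                # measure listing time over the mount
  gluster volume set myvol performance.stat-prefetch on    # cache stat results for listings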
2011 Oct 20
1
tons and tons of clients, oh my!
Hello gluster-verse! I'm about to see if GlusterFS can handle a large number of clients; this was not at all in the plans when we initially selected and set up our current configuration. What sort of experience do you (the collective "you" as in y'all) have with a large client to storage brick server ratio? (~1330:1) Where do you see things going awry? Most of this will be reads
2011 Oct 17
1
Need help with optimizing GlusterFS for Apache
Our webserver is configured as such: the actual website files (php, html, css and so on) are on a dedicated non-glusterfs ext4 partition. However, the website accesses videos and especially image files on a gluster mounted directory. The write performance for our backend gluster storage is not that important, since it only comes into play when someone uploads a video or image. However, the files
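For a read-heavy image/video directory, the usual first knobs are the io-cache size on the volume and longer attribute caching on the FUSE mount; the volume name and values below are illustrative assumptions, not recommendations:

  gluster volume set webmedia performance.cache-size 256MB
  gluster volume set webmedia performance.cache-refresh-timeout 10
  mount -t glusterfs -o attribute-timeout=10,entry-timeout=10 \
        server:/webmedia /var/www/media    # on each web server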
2011 Oct 26
2
Some questions about theoretical gluster failures.
We're considering implementing gluster for a genomics cluster, and it seems to have some theoretical advantages that have so far been borne out in some limited testing, modulo some odd problems with an inability to delete dir trees. I'm about to test with the latest beta that was promised to clear up these bugs, but as I'm doing that, answers to these Qs would be
2010 Nov 13
3
Gluster At SC10 ?
Howdy, are any of the Gluster folks going to SC10 next week? Mike
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some
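With one brick per drive, the distributed-replicated create command pairs consecutive bricks in the list, so each replica pair should span both servers; a truncated sketch with hypothetical names:

  gluster volume create bigvol replica 2 \
      server1:/data1 server2:/data1 \
      server1:/data2 server2:/data2
  # ...continue the brick list through server1:/data24 server2:/data24,
  # then start it with: gluster volume start bigvol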
2012 Dec 17
2
Transport endpoint
Hi, I've got the Gluster error "Transport endpoint is not connected". It came up twice after trying to rsync a 2 TB filesystem over; it reached about 1.8 TB and got the error. Logs on the server side (in reverse time order): [2012-12-15 00:53:24.747934] I [server-helpers.c:629:server_connection_destroy] 0-RedhawkShared-server: destroyed connection of
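"Transport endpoint is not connected" on a mount point means the glusterfs client process lost its bricks or died, and the FUSE mount has to be cycled before it is usable again. A minimal recovery sketch; the server name and mount path are assumptions (the volume name comes from the log line above):

  umount -l /mnt/RedhawkShared          # lazy unmount clears the dead FUSE mount
  mount -t glusterfs server1:/RedhawkShared /mnt/RedhawkShared
  gluster volume status RedhawkShared   # check that all bricks are actually up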
2011 Jan 19
2
tuning gluster performance vs. nfs
I have been tweaking and researching for a while now and can't seem to get "good" performance out of Gluster. I'm using Gluster to replace an NFS server (c1.xlarge) that serves files to an array of web servers, all in EC2. In my tests Gluster is significantly slower than NFS on average. I'm using a distributed replicated volume on two (m1.large) bricks: Volume Name: ebs
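One point that often comes up in Gluster-vs-NFS comparisons: the FUSE client pays a network round trip per lookup, so small-file web workloads sometimes fare better through Gluster's built-in NFSv3 server, which gets the kernel NFS client's caching for free. A sketch against the volume named above; the server name and mount point are hypothetical:

  mount -t nfs -o vers=3,tcp,nolock server1:/ebs /mnt/ebs   # Gluster's built-in NFS server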
2013 Jul 02
1
read-subvolume
Hi everyone I have installed 3.3.1-1 from the Debian repository you provide. I am using a simple 2 node cluster and running in replication mode. The connection between the nodes is limited to 100 Mb/sec (that's bits, not bytes!). Usage will be mainly for read access and since there is always a local copy available [ exactly 2 replicas on exactly 2 machines ] I expect very fast read
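For reference, AFR reads each file from one replica, and the read-subvolume option can pin reads to a chosen replica, named by the volume's internal client subvolume. A sketch with a hypothetical volume name:

  gluster volume set myvol cluster.read-subvolume myvol-client-0
  # note: this option is volume-wide; it pins every client's reads to that one replica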
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody. I have a problem setting up gluster failover functionality. Based on the manual I set up ucarp, which is working well (tested with ping/ssh etc.). But when I use the virtual address for the gluster volume mount and I turn off one of the nodes, the machine/gluster will freeze until the node is back online. My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In the gluster log I can see: [2011-06-06
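For reference, a ucarp instance advertising a shared virtual IP is typically started like this on each node (the interface, password, VHID and addresses are hypothetical):

  ucarp -i eth0 -s 192.168.3.233 -v 1 -p secret -a 192.168.3.200 \
        --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -B

Note that the FUSE client only uses the mount address to fetch the volume file; once mounted it talks to every brick directly, so a dead brick still has to time out (network.ping-timeout, 42 seconds by default) before I/O resumes, which would explain the freeze described above.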
2013 Jun 20
1
Rev Your (RDMA) Engines for the RDMA GlusterFest
It's that time again: we want to test the GlusterFS 3.4 beta before we unleash it on the world. Like our last test fest, we want you to put the latest GlusterFS beta through real-world usage scenarios that will show you how it compares to previous releases. Unlike the last time, we want to focus this round of testing on Infiniband and RDMA hardware. For a description of how to do this, see
2012 Mar 27
1
Targeting Bugs for GlusterFS 3.3 Beta 3
This is a list of bugs that Vijay has tagged for the 3.3.0 beta3 milestone: https://bugzilla.redhat.com/buglist.cgi?cmdtype=runnamed&namedcmd=Bugs%20targeted%20for%20GlusterFS%203.3.0%20beta3 Please take a look and see if any of them impact you. If so, feel free to add comments, and do please test fixes as they become available. You are, of course, welcome to submit patches as well,
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128KB), performance change if I reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file in a client: volume stripe type cluster/stripe option
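On 3.3 the stripe block size is set per volume from the CLI rather than in a hand-edited volfile; a sketch with a hypothetical volume name (128KB being the 3.3 default the subject refers to):

  gluster volume set stripedvol cluster.stripe-block-size 128KB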
2012 Jun 14
4
RAID options for Gluster
I think this discussion probably came up here already but I couldn't find much in the archives. Would you be able to comment on, or correct, whatever might look wrong? What options do people think are more adequate to use with Gluster in terms of the RAID underneath, for a good balance between cost, usable space and performance? I have thought about two main options with their pros and cons. No RAID (individual
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue. There are 1-2 crashed clients per day out of 120 clients in total. Below is the gdb result:

(gdb) where
#0  0x0000003267432885 in raise () from /lib64/libc.so.6
#1  0x0000003267434065 in abort () from /lib64/libc.so.6
#2  0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6
#3  0x00000032674750c6 in malloc_printerr () from
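When reporting a crash like this, the developers usually want the backtrace from every thread, not just the crashing one; with the core file that produced the trace above (the core path is hypothetical):

  gdb /usr/sbin/glusterfs /path/to/core
  (gdb) thread apply all bt full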
2013 Sep 13
1
glusterfs-3.4.1qa2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.1qa2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1qa2.tar.gz
This release is made off jenkins-release-42 -- Gluster Build System
2012 Feb 14
4
Exorbitant cost to achieve redundancy??
I'm trying to justify a GlusterFS storage system for my technology development group and I want to get some clarification on something that I can't seem to figure out architecture-wise... My storage system will be rather large: a significant fraction of a petabyte, and it will require scaling in size for at least one decade. From what I understand, GlusterFS achieves redundancy through
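As a rough worked example of the cost in question: with replica 2 every file is stored in full on two bricks, so usable capacity is half of raw. The arithmetic for a hypothetical half-petabyte usable target:

  # replica 2: usable = raw / 2   ->  0.5 PB usable needs 1.0 PB raw
  # replica 3: usable = raw / 3   ->  0.5 PB usable needs 1.5 PB raw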