2012 Feb 26
1
"Structure needs cleaning" error
Hi,
We have recently upgraded our gluster to 3.2.5 and have
encountered the following error. Gluster seems somehow
confused about one of the files it should be serving up,
specifically
/projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png
If I go to that directory and simply do an ls *.png I get
ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning
(along with a listing
2011 May 06
2
Best practice to stop the Gluster CLIENT process?
Hi all!
What's the best way to stop the CLIENT process for Gluster?
We have dual systems, where the Gluster servers also act as clients, so
both glusterd and glusterfsd are running on the system.
Stopping the server app works via "/etc/init.d/glusterd stop", but how is
the client stopped?
I need to unmount the filesystem from the server in order to do a fsck on
the ext4 volume;
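Not part of the original thread, but a plausible sketch of the usual sequence: unmount the client mount point first (the FUSE-side glusterfs client process exits when its mount goes away), then stop the server daemons. The mount point and process pattern are assumptions for illustration.

```shell
# Sketch of stopping the Gluster client on a combined server/client box.
# /mnt/gluster is an assumed mount point; substitute your own.
umount /mnt/gluster            # unmounting ends the glusterfs client process
/etc/init.d/glusterd stop      # then stop the server-side daemons
# if a stale glusterfs client process lingers, it can be killed directly:
pkill -f 'glusterfs.*fuse' || true
```

With the volume unmounted and the daemons stopped, the underlying ext4 brick can be fsck'd safely.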
2011 Jul 25
1
3.0.5 RDMA clients seem broken
...is (badly) broken there as well, but that's a whole other story.
n.b. we are updating them to 3.2.2 tomorrow.
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman at scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
2011 Oct 20
1
tons and tons of clients, oh my!
Hello gluster-verse!
I'm about to see if GlusterFS can handle a large number of clients; this was not at all in the plans when we initially selected and set up our current configuration.
What sort of experience do you (the collective "you" as in y'all) have with a large client to storage brick server ratio? (~1330:1) Where do you see things going awry?
Most of this will be reads
2010 Nov 13
3
Gluster At SC10 ?
Howdy, are any of the Gluster folks going to SC10 next week?
Mike
2011 Jan 19
2
tuning gluster performance vs. nfs
I have been tweaking and researching for a while now and can't seem to
get "good" performance out of Gluster.
I'm using Gluster to replace an NFS server (c1.xlarge) that serves
files to an array of web servers, all in EC2. In my tests Gluster is
significantly slower than NFS on average. I'm using a distributed
replicated volume on two (m1.large) bricks:
Volume Name: ebs
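The thread doesn't include the tuning that was tried, but a hedged sketch of the client-side caching knobs commonly adjusted first when a replicated Gluster volume trails NFS; the volume name "ebs" is from the post, while the option values are assumptions to be tuned per workload.

```shell
# Common first-pass performance options on a 3.x volume (values are guesses).
gluster volume set ebs performance.cache-size 256MB
gluster volume set ebs performance.io-thread-count 16
gluster volume set ebs performance.write-behind-window-size 1MB
```

Replicated volumes pay a per-operation penalty for the client-side replication writes, so NFS-level latency is often not reachable for small, synchronous operations.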
2011 Oct 26
2
Some questions about theoretical gluster failures.
We're considering implementing gluster for a genomics cluster, and it
seems to have some theoretical advantages that so far seem to have
been borne out in some limited testing, mod some odd problems with an
inability to delete dir trees. I'm about to test with the latest beta
that was promised to clear up these bugs, but as I'm doing that,
answers to these Qs would be
2011 Oct 17
1
Need help with optimizing GlusterFS for Apache
Our webserver is configured as such:
The actual website files (php, html, css and so on) are on a dedicated
non-glusterfs ext4 partition.
However, the website accesses videos and especially image files on a
gluster-mounted directory.
The write performance for our backend gluster storage is not that
important, since it only comes into play when someone uploads a video or
image.
However, the files
2011 Nov 05
1
glusterfs over rdma ... not.
OK - finished some tests over tcp and ironed out a lot of problems.
rdma is next; should be a snap now....
[I must admit that this is my 1st foray into the land of IB, so some
of the following may be obvious to a non-naive admin..]
except that while I can create and start the volume with rdma as
transport:
==================================
root@pbs3:~
622 $ gluster volume info glrdma
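For context, a hypothetical re-creation of an rdma-transport volume like the one shown above; the host name is taken from the prompt in the post, and the brick path is a placeholder.

```shell
# Create and start a volume using RDMA transport (brick path is assumed).
gluster volume create glrdma transport rdma pbs3:/export/brick1
gluster volume start glrdma
gluster volume info glrdma
```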
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them.
Would the standard/recommended approach be to make each drive its own
filesystem, and export 24 separate bricks, server1:/data1 ..
server1:/data24 ? Making a distributed replicated volume between this and
another server would then have to list all 48 drives individually.
At the other extreme, I could put all 24 drives into some
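The one-brick-per-drive layout described above can be sketched as follows; the volume name, server names, and paths mirror the post's hypothetical setup. Note that with replica 2, mirror pairs are formed in the order bricks are listed, so each server1:/dataN should be immediately followed by its server2:/dataN partner.

```shell
# Distributed-replicated volume, one brick per drive (sketch, first two pairs).
gluster volume create bigvol replica 2 \
  server1:/data1 server2:/data1 \
  server1:/data2 server2:/data2
# ...continue the pattern through /data24 on both servers, then:
gluster volume start bigvol
```

The trade-off the post raises is real: per-drive bricks isolate drive failures but make volume commands verbose, while aggregating drives (e.g. with RAID or LVM) under fewer bricks simplifies management at the cost of a larger failure domain per brick.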
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?
Hi,
I've been migrating data from an old striped 3.0.x gluster install to
a 3.3 beta install. I copied all the data to a regular XFS partition
(4K blocksize) from the old gluster striped volume and it totaled
9.2TB. With the old setup I used the following option in a "volume
stripe" block in the configuration file in a client :
volume stripe
type cluster/stripe
option
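On the 3.3 side, the stripe block size is set as a volume option through the CLI rather than in a client-side "volume stripe" configuration block; the volume name and size below are assumptions for illustration.

```shell
# 3.3-style equivalent of the client config's stripe block-size option.
gluster volume set stripedvol cluster.stripe-block-size 128KB
```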
2012 Sep 25
2
GlusterFS performance
GlusterFS newbie (less than a week) here. Running GlusterFS 3.2.6 servers
on Dell PE2900 systems with four 3.16 GHz Xeon cores and 16 GB memory
under CentOS 5.8.
For this test, I have a distributed volume of one brick only, so no
replication. I have made performance measurements with both dd and
Bonnie++, and they confirm each other; here I report only the dd numbers
(using bs=1024k). File
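A dd throughput check mirroring the bs=1024k runs described above. MOUNT is an assumption: point it at your Gluster client mount (it defaults to a scratch directory here so the sketch runs anywhere).

```shell
# Sequential write then read of a test file, 1 MiB blocks.
MOUNT=${MOUNT:-$(mktemp -d)}   # substitute your Gluster mount point
dd if=/dev/zero of="$MOUNT/ddtest.bin" bs=1024k count=64 conv=fsync
dd if="$MOUNT/ddtest.bin" of=/dev/null bs=1024k
rm -f "$MOUNT/ddtest.bin"
```

conv=fsync forces the data to be flushed before dd reports a rate, which keeps the write number honest on a networked filesystem.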
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all,
I am having problems with painfully slow directory listings on a freshly
created replicated volume. The configuration is as follows: 2 nodes with
3 replicated drives each. The total volume capacity is 5.6T. We would
like to expand the storage capacity much more, but first we need to figure
this problem out.
Soon after loading up about 100 MB of small files (about 300kb each), the
2012 Jun 14
4
RAID options for Gluster
I think this discussion has probably come up here already, but I couldn't find much in the archives. Would you be able to comment on or correct whatever might look wrong?
What options people think is more adequate to use with Gluster in terms of RAID underneath and a good balance between cost, usable space and performance. I have thought about two main options with its Pros and Cons
No RAID (individual
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody.
I have a problem setting up gluster failover functionality. Based on the
manual I set up ucarp, which is working well (tested with ping/ssh etc.).
But when I use the virtual address for the gluster volume mount and I turn
off one of the nodes, the machine/gluster will freeze until the node is
back online.
My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In
the gluster log I can see:
[2011-06-06
2012 Mar 15
28
Lustre and cross-platform portability
Whamcloud and EMC are jointly investigating how to be able to contribute the Lustre client code into the upstream Linux kernel.
As a prerequisite to this, EMC is working to clean up the Lustre client code to better match the kernel coding style, and one of the anticipated major obstacles to upstream kernel submission is the heavy use of code abstraction via libcfs for portability to other