Displaying 20 results from an estimated 9000 matches similar to: "Some questions about theoretical gluster failures."
2011 Nov 05
1
glusterfs over rdma ... not.
OK - finished some tests over tcp and ironed out a lot of problems.
rdma is next; should be a snap now....
[I must admit that this is my first foray into the land of IB, so some
of the following may be obvious to a non-naive admin.]
except that while I can create and start the volume with rdma as
transport:
==================================
root@pbs3:~
622 $ gluster volume info glrdma
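For reference, creating and starting a volume with rdma transport generally looks like the sketch below; the hostnames and brick paths here are made-up placeholders, not the poster's actual setup:
    # placeholder hosts/bricks; transport may be rdma or tcp,rdma
    gluster volume create glrdma transport rdma \
        pbs1:/bricks/glrdma pbs2:/bricks/glrdma
    gluster volume start glrdma
    gluster volume info glrdma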
2012 Jul 26
2
kernel parameters for improving gluster writes on millions of small writes (long)
This is a continuation of my previous posts about improving write perf
when trapping millions of small writes to a gluster filesystem.
I was able to improve write perf by ~30x by running STDOUT thru gzip
to consolidate and reduce the output stream.
Today, another similar problem, having to do with yet another
bioinformatics program (which these days typically handle the 'short
reads' that
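A rough sketch of the gzip approach mentioned above, assuming a hypothetical tool that streams its results onto the gluster mount (tool name and paths are placeholders):
    # before: millions of small writes land directly on the gluster mount
    # some_tool --in reads.fq > /mnt/gluster/out/results.txt
    # after: consolidate the stream through gzip before it hits gluster
    some_tool --in reads.fq | gzip -c > /mnt/gluster/out/results.txt.gz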
2013 Feb 27
4
GlusterFS performance
Hello!
I have GlusterFS installation with parameters:
- 4 servers, connected by 1Gbit/s network (760-800 Mbit/s by iperf)
- Distributed-replicated volume with 4 bricks and 2x4 redundancy formula.
- Replicated volume with 2 bricks and 2x2 formula.
I have run into a problem: if I try to copy a huge number of files (94000 files,
3Gb total), the process takes a terribly long time (from 20 to 40 minutes). I
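One hedged way to check whether per-file overhead (rather than raw bandwidth) dominates here is to compare copying the files individually against writing the same data as a single archive; the paths below are placeholders:
    # many small files: pays a lookup/create cost per file on the client
    time cp -a /source/files /mnt/gluster/many-files
    # same data written as one large file, for comparison
    time tar -C /source/files -cf /mnt/gluster/one-file.tar .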
2012 Aug 03
1
Gluster-users Digest, Vol 51, Issue 49
> Message: 4
> Date: Fri, 27 Jul 2012 15:29:41 -0700
> From: Harry Mangalam <hjmangalam at gmail.com>
> Subject: [Gluster-users] Change NFS parameters post-start
> To: gluster-users <gluster-users at gluster.org>
> Message-ID:
> <CAEib2OnKfENr8NhVwkvpsw21C5QJmzu_=C9j144p2Gkn7KP=LQ at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
2012 Jul 24
1
temp fix: Simultaneous reads and writes from specific apps to IPoIB volume seem to conflict and kill performance.
The problem described in the subject appears NOT to be the case. It's
not that simultaneous reads and writes dramatically decrease perf, but
that the type of /writes/ being done by this app (bedtools) kills
performance. If this was a self-writ app or an infrequently used one,
I wouldn't bother writing this up, but bedtools is a fairly popular
genomics app and since many installations use
2009 Mar 26
2
rsync questions
Hi all:
I got basic rsync working (not server mode). Basically it went to another server via ssh, backed up subdirectories and stored them on the local server. But I am trying to use the "exclude" feature and could not get it working right.
I am trying to back up /export/home/* (all of the users) on another machine but exclude certain types of files. Here is one of my tested exclude
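For reference, a minimal sketch of --exclude over ssh (host, paths and patterns are placeholders); exclude patterns are matched against the path below the transfer root, so a leading '/' anchors at that root, not at the filesystem root:
    rsync -av -e ssh \
        --exclude='*.mp3' --exclude='.cache/' \
        otherhost:/export/home/ /backup/home/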
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all,
I am having problems with painfully slow directory listings on a freshly
created replicated volume. The configuration is as follows: 2 nodes with
3 replicated drives each. The total volume capacity is 5.6T. We would
like to expand the storage capacity much more, but first we need to figure
this problem out.
Soon after loading up about 100 MB of small files (about 300kb each), the
2012 Sep 18
4
cannot create a new volume with a brick that used to be part of a deleted volume?
Greetings,
I'm running v3.3.0 on Fedora16-x86_64. I used to have a replicated
volume on two bricks. This morning I deleted it successfully:
########
[root@farm-ljf0 ~]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to
continue? (y/n) y
Stopping volume gv0 has been successful
[root@farm-ljf0 ~]# gluster volume delete gv0
Deleting volume will erase
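The snippet is cut off here, but the usual culprit for this is leftover gluster metadata on the old brick directory. A commonly cited workaround looks roughly like the following (the brick path is a placeholder, and this deliberately wipes the old volume's metadata on that brick):
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs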
2012 Aug 23
1
Stale NFS file handle
Hi, I'm a bit curious about error messages of the type "remote operation
failed: Stale NFS file handle". All clients using the file system use
Gluster Native Client, so why should stale nfs file handle be reported?
Regards,
/jon
2013 Dec 10
4
Structure needs cleaning on some files
Hi All,
When reading some files we get this error:
md5sum: /path/to/file.xml: Structure needs cleaning
in /var/log/glusterfs/mnt-sharedfs.log we see these errors:
[2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W [client-rpc-fops.c:526:client3_3_stat_cbk]
2012 Jun 08
1
too many redirects at the gluster download page
It may be just me/chrome, but trying to download the latest gluster by
clicking on the Download button next to the Ant leads not to a download
page but to the info page, which invites you to go back to the gluster.org
page you just came from.
And when you click on the alternative 'Download' links (the button on
the upper right or the larger "Download GlusterFS" icon
2012 Nov 14
1
Howto find out volume topology
Hello,
I would like to find out the topology of an existing volume. For example,
if I have a distributed replicated volume, what bricks are the replication
partners?
Fred
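For what it's worth, in 'gluster volume info' output the bricks are listed in creation order and consecutive bricks form the replica sets (for replica 2: Brick1+Brick2, then Brick3+Brick4, and so on). A rough sketch with made-up names; the '<-' notes are annotations, not part of the output:
    $ gluster volume info myvol
    Volume Name: myvol
    Type: Distributed-Replicate
    Number of Bricks: 2 x 2 = 4
    Bricks:
    Brick1: serverA:/data/brick1    <- replica pair 1
    Brick2: serverB:/data/brick1    <- replica pair 1
    Brick3: serverA:/data/brick2    <- replica pair 2
    Brick4: serverB:/data/brick2    <- replica pair 2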
2012 Oct 03
1
Retraction: Protocol stacking: gluster over NFS
Hi All,
Well, it <http://goo.gl/hzxyw> was too good to be true. Under extreme,
extended IO on a 48core node, some part of the the NFS stack collapses and
leads to an IO lockup thru NFS. We've replicated it on 48core and 64 core
nodes, but don't know yet whether it acts similarly on lower-core-count nodes.
Tho I haven't had time to figure out exactly /how/ it collapses, I
2011 May 06
2
Best practice to stop the Gluster CLIENT process?
Hi all!
What's the best way to stop the CLIENT process for Gluster?
We have dual systems, where the Gluster servers also act as clients, so
both glusterd and glusterfsd are running on the system.
Stopping the server works via "/etc/init.d/glusterd stop", but how is the
client stopped?
I need to unmount the filesystem from the server in order to do a fsck on
the ext4 volume;
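The native client is essentially a FUSE mount backed by a glusterfs process per mount point, so "stopping the client" normally just means unmounting; a sketch with a placeholder mount point:
    umount /mnt/glustervol        # ends the glusterfs client process for that mount
    fuser -vm /mnt/glustervol     # if umount reports "busy", see what is holding it
    umount -l /mnt/glustervol     # lazy unmount, as a last resort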
2011 Oct 20
1
tons and tons of clients, oh my!
Hello gluster-verse!
I'm about to see if GlusterFS can handle a large number of clients; this was not at all in the plans when we initially selected and set up our current configuration.
What sort of experience do you (the collective "you" as in y'all) have with a large client to storage brick server ratio? (~1330:1) Where do you see things going awry?
Most of this will be reads
2012 Feb 26
1
"Structure needs cleaning" error
Hi,
We have recently upgraded our gluster to 3.2.5 and have
encountered the following error. Gluster seems somehow
confused about one of the files it should be serving up,
specifically
/projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png
If I go to that directory and simply do an ls *.png I get
ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning
(along with a listing
2012 Jun 19
1
"Too many levels of symbolic links" with glusterfs automounting
I set up a 3.3 gluster volume for another sysadmin and he has added it
to his cluster via automount. It seems to work initially but after some
time (days) he is now regularly seeing this warning:
"Too many levels of symbolic links"
$ df: `/share/gl': Too many levels of symbolic links
when he tries to traverse the mounted filesystems.
I've been using gluster with static mounts
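For context, an automounter map for a glusterfs volume typically looks something like the sketch below (server, volume and paths are placeholders, not the actual config from this thread):
    # /etc/auto.master
    /share  /etc/auto.share

    # /etc/auto.share
    gl  -fstype=glusterfs  server1:/gl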
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them.
Would the standard/recommended approach be to make each drive its own
filesystem, and export 24 separate bricks, server1:/data1 ..
server1:/data24 ? Making a distributed replicated volume between this and
another server would then have to list all 48 drives individually.
At the other extreme, I could put all 24 drives into some
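A sketch of the first approach (one brick per drive, replica partners paired across the two servers; hostnames and paths are placeholders, and the real command would continue through /data24 on both servers):
    gluster volume create bigvol replica 2 \
        server1:/data1 server2:/data1 \
        server1:/data2 server2:/data2 \
        server1:/data3 server2:/data3 \
        server1:/data4 server2:/data4
With replica 2, each consecutive pair of bricks on the command line becomes a replica set, which is why the server1/server2 alternation matters.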
2009 Feb 13
4
uid/gid settings in rsyncd.conf not respected?
Hi All,
I must not understand the uid/gid line in rsyncd.conf. If someone
could briefly point out where I've gone wrong, I'd appreciate it.
I've created a special user to back up a server which has some users
who don't want all their files backed up, so I'm trying to address
their concerns by using the uid= and gid= lines in rsyncd.conf to
have the rsyncd run with
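For reference, uid/gid are daemon-side settings in rsyncd.conf (global or per-module) and only take effect when the daemon itself is started as root, since an unprivileged daemon cannot switch identities; a sketch with made-up names:
    # /etc/rsyncd.conf
    uid = backupuser
    gid = backupgrp

    [homes]
        path = /export/home
        read only = true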
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?
Hi,
I've been migrating data from an old striped 3.0.x gluster install to
a 3.3 beta install. I copied all the data to a regular XFS partition
(4K blocksize) from the old gluster striped volume and it totaled
9.2TB. With the old setup I used the following option in a "volume
stripe" block in the configuration file in a client :
volume stripe
type cluster/stripe
option
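The snippet ends mid-block, but for reference: in the legacy volfile syntax the stripe size was set with an "option block-size" line inside the cluster/stripe block, while on 3.3 the rough equivalent is a volume option set from the CLI. Both are sketched below with placeholder names:
    # legacy client volfile (3.0.x style; subvolume names are placeholders)
    volume stripe
      type cluster/stripe
      option block-size 2MB
      subvolumes client1 client2
    end-volume

    # 3.3-style equivalent via the CLI (volume name is a placeholder)
    gluster volume set stripevol cluster.stripe-block-size 128KB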