Displaying 8 results from an estimated 8 matches for "jdarcy".
2013 Nov 12 (2): Expanding legacy gluster volumes
...is unavoidable.
Is there a general-case solution for this problem? Is something planned
to deal with it? I can only think of a few specific corner-case solutions.
Another problem that comes to mind is ensuring that the older, slower
servers don't act as bottlenecks for the whole pool. jdarcy had mentioned
that Gluster might gain some notion of tiering, to support things like
SSDs in one part of the volume and slow drives at the other end. Maybe
this sort of architecture could be used to solve the same problems.
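For reference, the basic expansion workflow under discussion is just add-brick
plus a rebalance, which treats every brick as equal regardless of speed. A
minimal sketch, with hypothetical hostnames, volume name, and brick paths:

    # Add the new servers to the trusted pool (names are made up for illustration).
    gluster peer probe newserver1
    gluster peer probe newserver2
    # Add a new replica pair to the existing distribute-replicate volume.
    gluster volume add-brick myvol newserver1:/export/brick1 newserver2:/export/brick1
    # Spread existing data across the enlarged volume and watch progress.
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status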
Thoughts and discussion welcome.
Cheers,
James
2012 Nov 14 (3): Using local writes with gluster for temporary storage
...following links, but would be interested
in any more pointers you may have. Thanks.
http://thr3ads.net/gluster-users/2012/06/1941337-how-to-enable-nufa-in-3.2.6
http://blog.aeste.my/2012/05/15/glusterfs-3-2-updates/
http://www.gluster.org/2012/05/back-door-async-replication/
https://github.com/jdarcy/bypass
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: phaley at mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mi...
2011 Jun 20 (1): Per-directory brick preference?
Hi,
I operate a distributed replicated (1:2) setup that looks like this:
server1:bigdisk,server1:smalldisk,server2:bigdisk,server2:smalldisk
The replica sets are bigdisk-bigdisk and smalldisk-smalldisk.
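With replica 2, consecutive bricks on the volume-create command line form a
replica pair, so the pairing described above comes from the brick ordering. A
minimal sketch of creating such a layout, assuming a hypothetical volume name
and brick paths:

    # Consecutive bricks pair up: bigdisk with bigdisk, smalldisk with smalldisk.
    gluster volume create myvol replica 2 \
        server1:/bricks/bigdisk   server2:/bricks/bigdisk \
        server1:/bricks/smalldisk server2:/bricks/smalldisk
    gluster volume start myvol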
This setup will be extended by another set of four bricks (same setup)
within the next few days, and I could make those into another volume
entirely, but I'd prefer not to, leaving me with more
2012 Mar 09 (1): dht log entries in fuse client after successful expansion/rebalance
Hi
I'm using Gluster 3.2.5. After expanding a 2x2 Distributed-Replicate
volume to 3x2 and performing a full rebalance, the FUSE clients log the
following messages for every directory access:
[2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk]
1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench
[2012-03-08 10:53:56.953065] I
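These appear to be INFO-level messages from the DHT translator noticing that a
directory's on-disk layout no longer matches what the client has cached. A
common first step is to confirm the rebalance really completed, or to run a
fix-layout pass; a hedged sketch, assuming the volume is named bfd as the log
prefix suggests:

    # Check that the rebalance finished on every node.
    gluster volume rebalance bfd status
    # Recalculate directory layouts without moving data, if only the layout is stale.
    gluster volume rebalance bfd fix-layout start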
2012 Feb 26 (1): "Structure needs cleaning" error
Hi,
We have recently upgraded our Gluster installation to 3.2.5 and have
encountered the following error. Gluster seems somehow confused about
one of the files it should be serving up, specifically
/projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png
If I go to that directory and simply do an ls *.png I get
ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning
(along with a listing
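"Structure needs cleaning" is the kernel's text for the EUCLEAN error, which
usually means the brick's local filesystem has detected on-disk corruption for
that inode. A hedged sketch of the usual filesystem-level check, assuming the
bricks are on XFS and using made-up mount point and device names:

    # On the brick server holding the file: unmount the brick, then do a read-only check.
    umount /bricks/projects
    xfs_repair -n /dev/mapper/brick_projects   # -n reports problems without modifying anything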
2012 Jun 16 (5): Not real confident in 3.3
I do not mean to be argumentative, but I have to admit a little
frustration with Gluster. I know an enormous amount of effort has gone
into this product, and I just can't believe that, with all the effort
behind it and so many people using it, it could be so fragile.
So here goes. Perhaps someone here can point to the error of my ways. I
really want this to work because it would be ideal
2012 Feb 22 (2): "mismatching layouts" errors after expanding volume
Dear All-
There are many errors of the following type in my client and NFS
logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I [dht-layout.c:682:dht_layout_dir_mismatch]
0-atmos-dht: subvol: atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
2012 Dec 27 (8): how well will this work
Hi Folks,
I find myself trying to expand a 2-node high-availability cluster to a
4-node cluster. I'm running Xen virtualization, and currently
using DRBD to mirror data and Pacemaker to fail over cleanly.
The thing is, I'm trying to add 2 nodes to the cluster, and DRBD doesn't
scale. Also, as a function of rackspace limits, and the hardware at
hand, I can't separate
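The question here seems to be whether a GlusterFS replicated volume spanning
all four nodes can take over the mirroring role from DRBD. A minimal sketch,
with hypothetical hostnames, brick paths, and volume name:

    # From node1: form the trusted pool, then create a 4-brick, 2-way replicated volume.
    gluster peer probe node2
    gluster peer probe node3
    gluster peer probe node4
    gluster volume create vmstore replica 2 \
        node1:/bricks/vmstore node2:/bricks/vmstore \
        node3:/bricks/vmstore node4:/bricks/vmstore
    gluster volume start vmstore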