Displaying 8 results from an estimated 8 matches for "jdarci".

2013 Nov 12
2
Expanding legacy gluster volumes
Hi there, This is a hypothetical problem, not one that describes specific hardware at the moment. As we all know, gluster currently works best when each brick is the same size and each host has the same number of bricks. Let's call this a "homogeneous" configuration. Suppose you buy the hardware to build such a pool. Two years go by, and you want to grow the pool. Changes
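
Growing such a pool is usually a matter of add-brick followed by a rebalance. A minimal sketch, assuming a hypothetical replica-2 volume named myvol and new hosts server5 and server6:

    # Add a new replica pair to the existing volume
    gluster volume add-brick myvol server5:/export/brick1 server6:/export/brick1

    # Migrate existing data onto the new bricks and watch progress
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status

The heterogeneous case the poster describes is harder because DHT's default layout splits the hash space roughly evenly per brick, so bricks of unequal size fill at different rates.
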
2012 Nov 14
3
Using local writes with gluster for temporary storage
Hi, We have a cluster of 130 compute nodes with a NAS-type central storage system under gluster (3 bricks, ~50TB). When we run a large number of ocean models we can run into bottlenecks, with many jobs trying to write to our central storage. It was suggested to us that we could also use gluster to unite the disks on the compute nodes into a single "disk" in which files would be written
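
One way to read the suggestion is a plain distributed (non-replicated) scratch volume built from the node-local disks. A minimal sketch, with the volume name and brick paths purely hypothetical:

    # One brick per compute node; no replication, so losing a node
    # loses whatever files its brick holds
    gluster volume create scratch \
        node001:/local/brick node002:/local/brick node003:/local/brick
    gluster volume start scratch

Note that DHT places each file whole on one brick chosen by filename hash, so writes are not guaranteed to land on the writer's own disk; gluster's NUFA translator targets exactly that local-first placement, where available.
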
2011 Jun 20
1
Per-directory brick preference?
Hi, I operate a distributed replicated (1:2) setup that looks like this: server1:bigdisk, server1:smalldisk, server2:bigdisk, server2:smalldisk. Replica sets are bigdisk-bigdisk and smalldisk-smalldisk. This setup will be extended by another set of four bricks (same layout) within the next few days, and I could make those into another volume entirely, but I'd prefer not to, leaving me with more
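
For reference, replica pairing is determined by brick order at create time, so the layout described maps to something like this sketch (volume name and paths hypothetical):

    # Consecutive bricks form the replica sets:
    # bigdisk<->bigdisk, smalldisk<->smalldisk
    gluster volume create vol0 replica 2 \
        server1:/bigdisk server2:/bigdisk \
        server1:/smalldisk server2:/smalldisk

Stock DHT has no supported per-directory brick preference, which is why the usual workaround is a separate volume per placement policy.
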
2012 Mar 09
1
dht log entries in fuse client after successful expansion/rebalance
Hi, I'm using Gluster 3.2.5. After expanding a 2x2 Distributed-Replicate volume to 3x2 and performing a full rebalance, fuse clients log the following messages for every directory access: [2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk] 1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench [2012-03-08 10:53:56.953065] I
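
These are informational (level I) messages: the client notices directory layouts that predate the expansion and heals them as it revalidates. A hedged sketch of the usual remedy, re-running the layout fix; the volume name bfd is inferred from the log prefix 1-bfd-dht:

    # Recompute directory layouts across all 3x2 bricks without
    # moving file data
    gluster volume rebalance bfd fix-layout start
    gluster volume rebalance bfd status
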
2012 Feb 26
1
"Structure needs cleaning" error
Hi, We have recently upgraded our gluster to 3.2.5 and have encountered the following error. Gluster seems somehow confused about one of the files it should be serving up, specifically /projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png. If I go to that directory and simply do an ls *.png, I get: ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning (along with a listing
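
"Structure needs cleaning" is the kernel's EUCLEAN errno and, on an XFS-backed brick, usually points at on-disk corruption in the brick filesystem rather than in gluster itself. A repair sketch, with the mount point and device purely hypothetical; the brick must be offline first:

    # On the affected brick server
    umount /export/brick1
    xfs_repair -n /dev/sdb1   # no-modify pass: report problems only
    xfs_repair /dev/sdb1
    mount /export/brick1
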
2012 Jun 16
5
Not real confident in 3.3
I do not mean to be argumentative, but I have to admit a little frustration with Gluster. I know an enormous amount of effort has gone into this product, and I just can't believe that, with all the effort behind it and so many people using it, it could be so fragile. So here goes. Perhaps someone here can point out the error of my ways. I really want this to work because it would be ideal
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All- There are a lot of errors of the following type in my client and NFS logs following a recent volume expansion. [2012-02-16 22:59:42.504907] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol: atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511 [2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk] 0-atmos-dht: mismatching
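
When digging into these, the layout each brick holds for a directory can be read directly from the xattr DHT stores on the backend. A small sketch, with brick path and directory hypothetical:

    # The hex value encodes the hash range this brick owns for the
    # directory; across bricks the ranges should tile 0x00000000
    # through 0xffffffff with no gaps or overlaps
    getfattr -n trusted.glusterfs.dht -e hex /export/brick1/projects/somedir
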
2012 Dec 27
8
how well will this work
Hi Folks, I find myself trying to expand a 2-node high-availability cluster to a 4-node cluster. I'm running Xen virtualization, and currently using DRBD to mirror data and pacemaker to fail over cleanly. The thing is, I'm trying to add 2 nodes to the cluster, and DRBD doesn't scale. Also, as a function of rackspace limits and the hardware at hand, I can't separate
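
On the gluster side, one reading of the plan is a replica-2 volume across the four nodes, mirroring within pairs instead of via DRBD. A sketch with hypothetical node names and paths:

    # Two replica pairs: node1<->node2 and node3<->node4; VM images
    # are distributed across the pairs and mirrored within each
    gluster volume create vmstore replica 2 \
        node1:/export/vm node2:/export/vm \
        node3:/export/vm node4:/export/vm
    gluster volume start vmstore
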