Displaying 6 results from an estimated 6 matches for "gcnpublishing".
2012 Jun 16
5
Not real confident in 3.3
...to be an INCREDIBLY fragile system. Why would it lock
solid while copying a large file? Why no errors in the logs?
Am I the only one seeing this kind of behavior?
sean
--
Sean Fulton
GCN Publishing, Inc.
Internet Design, Development and Consulting For Today's Media Companies
http://www.gcnpublishing.com
(203) 665-6211, x203
2012 Mar 14
1
NFS: server localhost error: fileid changed
I recently moved from the FUSE client to NFS. Now I'm seeing a bunch of
this in syslog. Is this something to be concerned about, or is it
'normal' NFS behavior?
NFS: server localhost error: fileid changed
fsid 0:15: expected fileid 0xd88ba88a97875981, got 0x40e476ef5fdfbe9f
I also see a lot of 'stale file handle' in nfs.log, but the timestamps
don't correspond.
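The "fileid" in that message is the inode number NFS hands back to the client, so one way to catch the mismatch from userspace is to poll st_ino on a file and watch for changes. A minimal Python sketch, assuming a hypothetical path on the NFS-mounted volume:

    import os
    import time

    # Hypothetical path on the NFS-mounted Gluster volume; adjust to your mount.
    PATH = "/mnt/gluster/somefile"

    def watch_fileid(path, interval=5):
        # NFS exposes the server's fileid to applications as st_ino, so a
        # change here corresponds to the "fileid changed" kernel message.
        last = None
        while True:
            ino = os.stat(path).st_ino
            if last is not None and ino != last:
                print(f"fileid changed: was {last:#x}, now {ino:#x}")
            last = ino
            time.sleep(interval)

    watch_fileid(PATH)

If the value changes between polls while the file itself hasn't been replaced, the server is returning inconsistent attributes for the same object.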
2012 Mar 14
2
QA builds for 3.2.6 and 3.3 beta3
Greetings,
There are two releases coming soon to a download server near you:
1. GlusterFS 3.2.6 - a maintenance release that fixes some bugs.
2. GlusterFS 3.3 beta 3 - the next iteration of the exciting new hotness that will be 3.3
You can find both of these in the "QA builds" server:
http://bits.gluster.com/pub/gluster/glusterfs/
There are source tarballs and binary RPMs.
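For anyone scripting the download, the index can also be listed programmatically; a minimal Python sketch, assuming the server at the URL above serves a plain HTML directory listing:

    import re
    import urllib.request

    # Index URL from the announcement; assumes a simple autoindex page
    # (the server may no longer be online).
    URL = "http://bits.gluster.com/pub/gluster/glusterfs/"

    with urllib.request.urlopen(URL) as resp:
        html = resp.read().decode("utf-8", errors="replace")

    # Crude href scrape; good enough for a plain directory listing.
    for name in re.findall(r'href="([^"]+)"', html):
        if name != "../" and not name.startswith(("?", "/")):
            print(name)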
2012 Mar 12
0
Data consistency with Gluster 3.2.5
...: option nfs3.web-pub.volume-id ac556d2e-e8a9-4857-bd17-cab603820fcb
68: subvolumes web-pub
69: end-volume
Any ideas or help would be greatly appreciated.
sean
--
Sean Fulton
GCN Publishing, Inc.
Internet Design, Development and Consulting For Today's Media Companies
http://www.gcnpublishing.com
(203) 665-6211, x203
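The numbered lines in that excerpt are a volfile dump as it appears in the Gluster logs, with each volfile line prefixed by its position. A small parser makes such fragments easier to compare across servers; a minimal Python sketch, with the helper and sample lines illustrative (taken from the excerpt above):

    def parse_volfile_dump(lines):
        # Pull "option key value" pairs out of a volfile dump whose
        # lines carry a "NN:" line-number prefix from the log.
        options = {}
        for line in lines:
            _, _, body = line.partition(":")
            body = body.strip()
            if body.startswith("option "):
                _, key, value = body.split(None, 2)
                options[key] = value
        return options

    dump = [
        "...: option nfs3.web-pub.volume-id ac556d2e-e8a9-4857-bd17-cab603820fcb",
        "68: subvolumes web-pub",
        "69: end-volume",
    ]
    print(parse_volfile_dump(dump))
    # {'nfs3.web-pub.volume-id': 'ac556d2e-e8a9-4857-bd17-cab603820fcb'}

Diffing the extracted options between the client and server volfiles is a quick consistency check.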
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
All,
For our project, we bought 8 new Supermicro servers. Each server has a
quad-core Intel CPU in a 2U chassis supporting 8 x 7200 RPM SATA drives.
To start out, we only populated 2 x 2TB enterprise drives in each
server and added all 8 peers, with their total of 16 drives as bricks,
to our Gluster pool as distributed-replicated (replica 2). The replica
pairing worked as follows:
1.1 -> 2.1
1.2
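Which bricks end up mirrored is determined by the order they are listed at volume-create time: each consecutive group of replica-count bricks forms one replica set. A minimal Python sketch of that grouping, with hypothetical server and brick names:

    def replica_sets(bricks, replica=2):
        # Consecutive groups of `replica` bricks form one replica set,
        # mirroring the "1.1 -> 2.1" pairing described above.
        if len(bricks) % replica:
            raise ValueError("brick count must be a multiple of the replica count")
        return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]

    # Order bricks so drive 1 on server 1 pairs with drive 1 on server 2, etc.
    bricks = [f"server{s}:/data/brick{d}" for d in (1, 2) for s in (1, 2)]
    for pair in replica_sets(bricks):
        print(" <-> ".join(pair))

Listing drive 1 of server 1 next to drive 1 of server 2 yields the 1.1 -> 2.1 pairing above; listing both drives of the same server adjacently would mirror a server to itself and lose redundancy.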
2012 Dec 27
8
how well will this work
Hi Folks,
I find myself trying to expand a 2-node high-availability cluster to a
4-node cluster. I'm running Xen virtualization, and currently
using DRBD to mirror data, and pacemaker to failover cleanly.
The thing is, I'm trying to add 2 nodes to the cluster, and DRBD doesn't
scale. Also, as a function of rackspace limits and the hardware at
hand, I can't separate