Similar to: Gluster 3.2.0 and ucarp not working

Displaying 20 results from an estimated 2000 matches similar to: "Gluster 3.2.0 and ucarp not working"

2011 Sep 15
1
Gluster 3.2 configurations + translators
Hello, I'm a little confused about the Gluster configuration interface. I started with Gluster 3.2 and did all configuration using the gluster CLI. Now, while looking into how to tune performance, I found many places in the documentation showing pieces of text configuration files, but usually with a warning that they are old and should not be used. Right now I'm solving how to turn
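
For context, Gluster 3.2 exposes most of the tuning that used to live in hand-edited volfiles through the CLI instead. A minimal sketch, where VOLNAME and the values are placeholder assumptions, not recommendations:

    # Set tunables through the CLI rather than editing volfiles
    gluster volume set VOLNAME performance.cache-size 256MB
    gluster volume set VOLNAME performance.io-thread-count 16
    gluster volume info VOLNAME   # verify the options were applied
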
2011 Jul 20
1
Top Reset
Hello, is there any way to reset volume TOP statistics? Thanks, Matus
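
For context, newer Gluster releases expose a clear operation on the top command; assuming it is available in your build, the invocation would look like this (VOLNAME and the brick path are placeholders):

    # Reset top statistics for a whole volume, or for a single brick
    gluster volume top VOLNAME clear
    gluster volume top VOLNAME clear brick server01:/export/brick1
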
2006 Apr 14
1
Ext3 and 3ware RAID5
I run a decent amount of 3ware hardware, all under centos-4. There seems to be some sort of fundamental disagreement between ext3 and 3ware's hardware RAID5 mode that trashes write performance. As a representative example, one current setup is 2 9550SX-12 boards in hardware RAID5 mode (256KB stripe size) with a software RAID0 stripe on top (also 256KB chunks). bonnie++ results look
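
One common mitigation for layered-RAID write performance is telling ext3 about the stripe geometry at mkfs time: with 4KB blocks and the 256KB stripes described above, stride = 256KB / 4KB = 64. A hypothetical sketch, with the device name as a placeholder:

    # Align ext3 block allocation to the 256KB hardware stripe
    mkfs.ext3 -b 4096 -E stride=64 /dev/sda1
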
2009 Nov 18
2
simple NFSv4 setup
I'm trying to set up a simple NFSv4 mount between two x86_64 hosts. On the server, I have this in /etc/exports: /export $CLIENT(ro,fsid=0) /export/qb3 $CLIENT(rw,nohide) On $CLIENT, I mount via: mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3 However: $ touch /usr/local/sge62/qb3/foo touch: cannot touch `/usr/local/sge62/qb3/foo': Read-only file system I'd really
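
For reference, one common arrangement of /etc/exports for an NFSv4 pseudo-root is sketched below; the paths and client come from the post, but the options are an illustration, not a verified fix for this error:

    # /etc/exports -- fsid=0 marks the NFSv4 pseudo-root
    /export      $CLIENT(ro,fsid=0,sync)
    /export/qb3  $CLIENT(rw,sync,no_subtree_check)

    # re-export, then mount relative to the pseudo-root
    exportfs -ra
    mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3
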
2006 Nov 09
2
How to create a huge file system - 3-4TB?
We have a server with 6x750GB SATA drives on a hardware RAID controller. We created a hardware RAID 5 across the six HDDs; the effective size after RAID 5 is 3.4TB. We want to use this server as a data backup server. Here is the problem we are stuck with: with fdisk -l we can see the drive specs and its size as 3.4TB, but when we want to create two different
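
The usual cause is that fdisk's msdos partition table cannot address volumes past roughly 2TB; a GPT label created with parted can. A hypothetical sketch, assuming a reasonably recent parted, with the device name and split points as placeholders:

    # GPT handles >2TB; split the 3.4TB unit into two partitions
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary ext3 0% 50%
    parted /dev/sdb mkpart primary ext3 50% 100%
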
2008 Jun 22
8
3ware 9650 issues
I've been having no end of issues with a 3ware 9650SE-24M8 in a server that's coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB) hooked to it, running as a single RAID6 w/ a hot spare. These issues boil down to the card periodically throwing errors like the following: sd 1:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card. Usually when this
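
As a first diagnostic step on 3ware hardware, the vendor's tw_cli utility reports controller, unit, and drive state; a sketch, with the controller and unit numbers as placeholders:

    # Check controller and RAID6 unit health after a timeout/reset event
    tw_cli /c0 show
    tw_cli /c0/u0 show all
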
2005 Oct 20
1
RAID6 in production?
Is anyone using RAID6 in production? In moving from hardware RAID on my dual 3ware 7500-8 based systems to md, I decided I'd like to go with RAID6 (since md is less tolerant of marginal drives than is 3ware). I did some benchmarking and was getting decent speeds with a 128KiB chunksize. So the next step was failure testing. First, I fired off memtest.sh as found at
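
For reference, an md RAID6 array with the 128KiB chunk size mentioned above would be created roughly as follows; the device list and member count are placeholder assumptions:

    # Create a RAID6 md array with a 128KiB chunk size
    mdadm --create /dev/md0 --level=6 --chunk=128 \
          --raid-devices=8 /dev/sd[b-i]1
    cat /proc/mdstat   # watch the initial resync
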
2005 Dec 27
1
amd64 benchmarks
Has anyone here benchmarked 64-bit 4.2 on a dual-core Opteron (or Athlon 64 X2) vs a pair of physical single-core Opterons? It's that time of year again... ordering new workstations. 8-) Cheers,
2005 Nov 22
3
server exercising, stressing, and/or testing
Greetings. Would someone please point me to a good server exercising, stressing, and/or testing program that will run on CentOS 4? I want one that will not outright destroy a machine, so to speak... meaning testing is one thing, but pounding a box's hard drives beyond what is called for does not appeal to me. FYI, the box I want to test/stress this time
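
One commonly suggested tool for this is stress, which lets you load subsystems selectively; a hypothetical gentle run that exercises CPU and memory while leaving the disks alone:

    # 4 CPU spinners and 2 x 256MB memory workers for 10 minutes
    stress --cpu 4 --vm 2 --vm-bytes 256M --timeout 600s
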
2005 Dec 05
2
slow usb hard disk performance.
Dear All, I tried a USB2 Maxtor OneTouch II external hard disk on a couple of my CentOS 4.2 boxes and found it initialised the SCSI subsystem OK and added device "sda". But the performance is miserable, while the same hardware running XP performs satisfactorily. hdparm gives results varying from 120KB/sec to a peak of 4.75MB/s on a USB 2 machine, still very poor by any
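
For reference, the throughput measurement in question is typically taken like this (device name assumed from the post):

    # -T: cached reads (no disk I/O); -t: buffered reads from the device
    hdparm -tT /dev/sda
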
2012 Oct 24
5
Multiple resource definition error
Hi, I am writing a module to install and configure ucarp. There is only one module on Puppet Forge and it is not very good. With ucarp, the same configuration files have to be served on two servers, and to configure the host I am defining a custom resource, ucarp::host::config. So, to use it, I'll have to create this resource twice, on two different servers. So, this resource
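
However the Puppet side is templated, the file that ucarp::host::config serves generally boils down to an invocation like the sketch below; every value here is a placeholder:

    # The ucarp invocation the managed configuration would drive
    ucarp --interface=eth0 --srcip=192.168.1.10 --vhid=1 \
          --pass=secret --addr=192.168.1.100 \
          --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh
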
2006 Jan 06
2
3ware disk failure -> hang
I've got an i386 server running centos 4.2 with 3 3ware controllers in it -- an 8006-2 for the system disks and 2 7500-8s. On the 7500s, I'm running an all software RAID50. This morning I came in to find the system hung. Turns out a disk went overnight on one of the 7500s, and rather than a graceful failover I got this: Jan 6 01:03:58 $SERVER kernel: 3w-xxxx: scsi2: Command
2011 Apr 09
2
when one of the servers is down, the delay is too long
I have two GlusterFS servers and the volume is replicated. Their addresses are server01:192.168.1.10 and server02:192.168.1.11. The client mounts server01's volume and I can use GlusterFS normally. Now, while I am reading a file on the GlusterFS volume, server02's interface goes down suddenly, and the client hangs as well. It resumes after a delay (about 10s). I think
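
The delay described matches the GlusterFS client-side ping timeout, which governs how long clients wait before declaring a brick dead (42 seconds by default in the 3.x series). A sketch of lowering it, with VOLNAME and the value as placeholder assumptions:

    # Shorten how long clients block when a replica goes silent
    gluster volume set VOLNAME network.ping-timeout 10
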
2004 Oct 21
3
Ucarp and shorewall
Has anyone successfully set up a Shorewall UCARP solution?
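
The usual glue between the two is ucarp's up/down scripts, which receive the interface name as $1; a minimal sketch, with the VIP and netmask as assumptions:

    #!/bin/sh
    # /etc/vip-up.sh -- bring up the VIP, then reload the firewall
    /sbin/ip addr add 192.168.1.100/24 dev "$1"
    /sbin/shorewall restart
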
2005 Nov 23
2
[OT] Message-ID Threading w/Subject Append Example -- WAS: pine rpm for centos 4
On Tue, 22 Nov 2005 11:29:00 -0500 (EST), Joshua Baker-LePain <jlb17 at duke.edu> wrote: > I'm sorry, but making decisions based on Stupid User Tricks is about > the worst policy I can imagine. That way lies madness. No, where lies madness is in the self-centred way in which some people make demands of others to alter innocuous behaviour so that the data requirements of
2011 Mar 22
2
Why does glusterfs have nfs stuff on the server
When I installed gluster and did a "ps" on the processes, I saw: /usr/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log My question is: why does glusterfs use nfs-server.vol, nfs.pid and nfs.log instead of some generic name? This is confusing and makes me think it's using NFS somehow on the server even though
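
Those files belong to Gluster's built-in NFS translator, which runs on every server by default. If it is unwanted, it can be switched off per volume; VOLNAME below is a placeholder:

    # Disable the built-in NFS server for a volume
    gluster volume set VOLNAME nfs.disable on
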
2011 Apr 21
1
ESXi & Gluster setup options
All, We are in the process of planning a virtualized infrastructure and wanted to hear from current users of Gluster and VMware. What we are looking to set up is an HA ESXi cluster (2 heads) with a Gluster backend (4 bricks to start, replicated/distributed); all backend connectivity would be 10GbE. Mainly the storage would be for VM images but may include NAS files later. So our
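
For reference, the backend described (4 bricks, replicated/distributed) would be created roughly as sketched below; volume, host, and path names are all placeholders:

    # 4 bricks with replica 2 => a 2x2 distributed-replicated volume
    gluster volume create vmstore replica 2 transport tcp \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    gluster volume start vmstore
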
2010 Dec 08
1
NFS with UCARP vs. GlusterFS mount question
Morning Folks, should I prefer NFS with UCARP or native GlusterFS mounts for serving the system images to XCP? Which one performs better over 1G network links? NFS is probably easier to set up due to existing tools like rpcinfo and showmount; both are used inside the storage container code, and there is already some code for NFS but not for GlusterFS, unless I write it. UCARP has the disadvantage that
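
The checks mentioned above would run against the UCARP-managed floating address; a sketch, with $VIP and the export path as placeholders:

    # Verify the active head answers on the VIP before mounting
    rpcinfo -p $VIP        # portmapper/nfs/mountd registered?
    showmount -e $VIP      # what does the active head export?
    mount -t nfs $VIP:/images /mnt/images
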
2005 Feb 05
9
Hot Failover
Hello List: Recently our Shorewall FW server went dead (PS failure) and brought the entire system down. Luckily we are still testing the FW and other servers, so we did not lose anything. Now we have decided to set up two Shorewall FW servers, a primary and a failover FW server. I have done some research, cruised the Internet, and found that a product 'UCARP'
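
A typical UCARP primary/standby pair differs only in source address and advertisement skew: the lower --advskew wins the master election. A hypothetical sketch with all addresses as placeholders:

    # On the primary firewall (advskew 0 -> preferred master)
    ucarp -i eth0 -s 10.0.0.1 -v 1 -p secret -a 10.0.0.100 -k 0 \
          --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh
    # On the standby firewall (higher advskew -> backup)
    ucarp -i eth0 -s 10.0.0.2 -v 1 -p secret -a 10.0.0.100 -k 50 \
          --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh
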
2011 Aug 12
2
Replace brick of a dead node
Hi! Seeking pardon from the experts, but I have a basic usage question that I could not find a straightforward answer to. I have a two-node cluster, with two bricks replicated, one on each node. Let's say one of the nodes dies and is unreachable. I want to be able to spin up a new node and replace the dead node's brick with a location on the new node. The command 'gluster volume
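
One commonly cited approach for exactly this case, using the 3.x syntax and with host and path names as placeholders, is to probe the new node and force the brick replacement:

    # Add the replacement node, then swap the dead brick in place
    gluster peer probe newnode
    gluster volume replace-brick VOLNAME \
        deadnode:/export/brick newnode:/export/brick commit force
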