Displaying 20 results from an estimated 4000 matches similar to: "Ext3 and 3ware RAID5"
2006 Oct 01
4
3Ware 9550SX-4LP Performance
I know there are a few 3Ware fans here and I was hoping to find some help. I
just built a new server using a 3Ware 9550SX-4LP with four disks in RAID 5.
The array is fully initialized, but I'm not getting the write performance I
was hoping for -- only 40 to 45MB/sec.
3Ware's site advertises 300MB/sec writes using 8 disks on the PCI Express
version of this card (the 9580, I think).
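Slow RAID5 writes on the 9550 family are most often traced to the unit's
write cache, queueing, or StorSave settings. A minimal check-and-enable
sketch with 3ware's tw_cli follows; the controller/unit numbers (/c0/u0)
are assumptions, so adjust them to your layout, and note that enabling the
write cache without a BBU risks data loss on power failure.

  # show the unit's current cache/queueing/StorSave settings
  tw_cli /c0/u0 show

  # enable the write cache and command queueing on the unit
  tw_cli /c0/u0 set cache=on
  tw_cli /c0/u0 set qpolicy=on

  # bias the StorSave policy toward throughput (protect/balance/perform)
  tw_cli /c0/u0 set storsave=perform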
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get read and write performance above approximately
40MB/s on an ext2 file system. IMO, this is horrible performance for a
6-drive hardware RAID 5 array. Please have a look at
what I'm doing and let me know if anybody has any suggestions on how to
improve the performance...
System specs:
-----------------
2 x 2.8GHz Xeons
6GB RAM
1 x 3ware 9500S-12
2 x 6-drive,
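One knob worth trying before blaming the hardware: telling mke2fs the RAID
stripe geometry so metadata doesn't pile up on one disk. A sketch, assuming
a 64KiB controller chunk, 4KiB blocks, and /dev/sdb1 as the array (all
assumptions, since the post doesn't give the stripe size or device):

  # stride = chunk size / block size = 64KiB / 4KiB = 16
  # (older mke2fs spells this -R stride=16 instead of -E)
  mkfs.ext2 -b 4096 -E stride=16 /dev/sdb1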
2008 Jun 22
8
3ware 9650 issues
I've been having no end of issues with a 3ware 9650SE-24M8 in a server that's
coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB) hooked to it,
running as a single RAID6 w/ a hot spare. These issues boil down to the card
periodically throwing errors like the following:
sd 1:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card.
Usually when this
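A first diagnostic step for timeouts like that is to pull the controller's
own event log and per-port SMART data, which the 9000-series exposes through
its character device rather than through /dev/sdX. A sketch; /c0 and port 0
are assumptions:

  # controller event/alarm log
  tw_cli /c0 show alarms

  # SMART attributes for the drive on port 0 behind the card
  smartctl -a -d 3ware,0 /dev/twa0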
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list,
I thought I'd just share my experiences with this 3Ware card, and see
if anyone might have any suggestions.
System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM
installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID
1 plus 2 hot spare config. The array is properly initialized, write
cache is on, as is queueing (and supported by the drives). StorSave
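The stock advice for this latency pattern, for what it's worth, was to
switch the array's elevator and shrink its request queue so one heavy writer
can't monopolize the device. A sketch, assuming the array appears as
/dev/sda and the kernel supports runtime elevator switching (otherwise use
elevator=deadline on the boot line):

  # use the deadline elevator instead of the default
  echo deadline > /sys/block/sda/queue/scheduler

  # a smaller queue means less latency under a heavy writer
  echo 128 > /sys/block/sda/queue/nr_requests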
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody.
I have a problem setting up gluster failover functionality. Based on the
manual I set up ucarp, which is working well (tested with ping/ssh,
etc.).
But when I use the virtual address for the gluster volume mount and I
turn off one of the nodes, the machine/gluster will freeze until the
node is back online.
My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In
the gluster log I can see:
[2011-06-06
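For reference, a typical ucarp pairing for this layout might look like the
sketch below. The interface, vhid, password, scripts, and the 192.168.x
prefix are all assumptions; only the .200/.233 octets come from the post.

  # on the node whose real IP ends in .233, advertise the .200 virtual IP
  ucarp -i eth0 -s 192.168.3.233 -v 10 -p secretpass \
        -a 192.168.3.200 \
        --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh

A freeze like the one described usually means the client was still talking
to the dead node; ucarp only moves the address, so the mount still has to
notice the failure and reconnect.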
2006 Oct 12
18
Write performance with 3ware 9550
I have two identical servers. The only difference is that the first
one has Maxtor 250G drives and the second one has Seagate 320G drives.
OS: CentOS-4.4 (fully patched)
CPU: dual Opteron 280
Memory: 16GB
Raid card: 3ware 9550Sx-8LP
Raid volume: 4-disk Raid 5 with NCQ and Write Cache enabled
On the first server I have decent performance. Nothing spectacular,
but good enough. The second one
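When two "identical" servers diverge like this, it helps to take the
filesystem and page cache out of the comparison and measure the raw array
the same way on both. A sketch; the mount point and size are placeholders:

  # sequential write test that bypasses the page cache
  dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 oflag=direct
  rm /mnt/array/ddtest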
2005 Nov 30
2
Too much memory cache being used while moving large file
System:
CentOS 4.2
2.6.9-22.0.1.ELsmp
System fully up-to-date.
3GB RAM
3ware 9000S card with a RAID5 array.
I think that's about all relevant info ...
Had a file on disk (not the array) and attempted to mv the file to the
array.
It went fine until 2.4GB was copied, then it slowed down to a meg every
few minutes.
Free memory was ~50MB (typically it is 1.5-2GB), and cache was 2.5GB.
I stopped the move; however, the cache
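The usual lever for this on 2.6 kernels is the dirty-page writeback
tunables, so the kernel starts flushing earlier instead of letting gigabytes
of dirty cache pile up in front of a slow array. The values below are
illustrative, not recommendations:

  # start background writeback sooner, and cap dirty pages harder
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=10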
2006 Jan 06
2
3ware disk failure -> hang
I've got an i386 server running CentOS 4.2 with three 3ware controllers in
it -- an 8006-2 for the system disks and two 7500-8s. On the 7500s, I'm
running an all-software RAID50. This morning I came in to find the system
hung. Turns out a disk failed overnight on one of the 7500s, and rather
than a graceful failover I got this:
Jan 6 01:03:58 $SERVER kernel: 3w-xxxx: scsi2: Command
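Once the box is back up, the md side of the failed member is worth
inspecting and cleaning up by hand. A sketch; the array and device names are
assumptions:

  # see which member md thinks failed
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # mark it failed, remove it, and add the replacement
  mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
  mdadm /dev/md0 --add /dev/sdg1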
2005 Oct 20
1
RAID6 in production?
Is anyone using RAID6 in production? In moving from hardware RAID on my dual
3ware 7500-8 based systems to md, I decided I'd like to go with RAID6
(since md is less tolerant of marginal drives than 3ware is). I did some
benchmarking and was getting decent speeds with a 128KiB chunk size.
So the next step was failure testing. First, I fired off memtest.sh as
found at
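For reference, creating an md RAID6 with the 128KiB chunk mentioned above
would look roughly like this; the device names and count are assumptions:

  # 8-device RAID6, 128KiB chunk
  mdadm --create /dev/md0 --level=6 --raid-devices=8 --chunk=128 /dev/sd[b-i]
  cat /proc/mdstat   # watch the initial resync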
2006 Jul 12
6
Speaking of 3Ware/RAID/SATA/9550SX...
I just built up a server running CentOS 4.3, and am trying to boot from a
3Ware 9550SX-16ML 16-port card. This system has 9x500GB drives, an AMD
Opteron, a Tyan K8SD mobo and 1GB of RAM. I downloaded the latest (9.3.0.4)
driver from
3Ware, and after a couple of botched attempts, I got CentOS installed. In my
two botched attempts, the problem was X hanging during the formatting of the
drives, so I
2009 Nov 18
2
simple NFSv4 setup
I'm trying to set up a simple NFSv4 mount between two x86_64 hosts. On the
server, I have this in /etc/exports:
/export $CLIENT(ro,fsid=0)
/export/qb3 $CLIENT(rw,nohide)
On $CLIENT, I mount via:
mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3
However:
$ touch /usr/local/sge62/qb3/foo
touch: cannot touch `/usr/local/sge62/qb3/foo': Read-only file system
I'd really
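The usual NFSv4 gotcha here: with fsid=0, every client path is resolved
relative to the pseudo-root, so server:/qb3 maps onto /export/qb3, and the
options on the root export still matter during traversal. One commonly
suggested variant of the exports file is sketched below; the hostname and
the extra options are assumptions:

  # /etc/exports -- client paths are relative to the fsid=0 root
  /export      client.example.com(rw,fsid=0,no_subtree_check)
  /export/qb3  client.example.com(rw,nohide,no_subtree_check)

  # on the client (unchanged): /qb3 resolves under the pseudo-root
  mount -t nfs4 server.example.com:/qb3 /usr/local/sge62/qb3

Re-exporting with exportfs -ra after editing is the step most often
forgotten.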
2006 Jun 20
3
SuperMicro X7DBE with CentOS4?
I am planning to build a server based on the SuperMicro X7DBE+-O
motherboard. This server will have two dual-core Xeon processors and
a 3ware 9550SX raid card.
Has anyone built a server based on this motherboard? Are there any
issues with the SMP/dual-core support in CentOS 4 that might cause
problems?
I appreciate any input. I'm just looking to find out about any
possible problems before
2009 Jan 15
2
3Ware 9650SE tuning advice
Hello fellow sysadmins!
I've assembled a whitebox system with a SuperMicro motherboard, case,
8GB of memory and a single quad-core Xeon processor.
I have two 9650SE-8LPML cards (8 ports each) in each server with 12 1TB
SATA drives total. Three drives per "lane" on each card.
CentOS 5.2 x86_64.
I'm looking for advice on tuning this thing for performance.
Especially for the
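Beyond the controller's own settings, the block-layer knob that most often
moves sequential numbers on these cards is read-ahead. A sketch; the device
name is an assumption, and the value (in 512-byte sectors) is illustrative:

  # raise read-ahead to 8MiB = 16384 sectors
  blockdev --setra 16384 /dev/sda
  blockdev --getra /dev/sda   # verify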
2006 Nov 09
2
How to create a huge file system - 3-4TB?
We have a server with 6x750GB SATA drives set up on a hardware RAID
controller. We created a hardware RAID 5 array on these 6x750GB HDDs. The
effective size after the RAID 5 implementation is 3.4TB. We want to use
this server as a data backup server.
Here is the problem we are stuck with: when we use fdisk -l, we can see the
drive specs and its size as 3.4TB. But when we want to create two different
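The wall being hit here is almost certainly the msdos partition table, which
tops out at 2TB; a GPT label via parted is the usual route past it. A
sketch; the device name is an assumption:

  # replace the msdos label with GPT, then make one big partition
  parted /dev/sdb mklabel gpt
  parted /dev/sdb mkpart primary 0% 100%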
2009 Feb 11
4
smartd and 3ware 9xxx configs
I'm looking to do a bit more monitoring of my 3ware 9550 with smartd,
and wanted to see what others were doing with SMART for monitoring
3ware hardware.
Do you have smartd.conf configured to run self-tests, or simply to monitor
health status?
Are you monitoring the drive as CentOS sees it (/dev/sdX), or are you
using the 3ware /dev/twaX device for monitoring?
Opinions and discussions are welcome :-P
--
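For what it's worth, smartd reaches drives behind 9000-series cards through
the 3ware character device rather than the exported /dev/sdX, one directive
per physical port. An illustrative smartd.conf fragment (the test schedule
is an assumption):

  # /etc/smartd.conf -- one line per port behind the controller
  # -a monitors all attributes; -s L/../../7/02 runs a long self-test
  # on day-of-week 7 at 2am
  /dev/twa0 -d 3ware,0 -a -s L/../../7/02
  /dev/twa0 -d 3ware,1 -a -s L/../../7/03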
2005 Dec 27
1
amd64 benchmarks
Has anyone here benchmarked 64-bit 4.2 on a dual-core Opteron (or
Athlon 64 X2) versus a pair of physical single-core Opterons? It's that
time of year again... ordering new workstations. 8-)
Cheers,
2005 Nov 22
3
server exercising, stressing, and/or testing
Greetings,
Would someone please point me to an excellent server exercising, stressing,
and/or testing program that will run on CentOS 4?
I want one that will not out-and-out destroy a machine, so to speak...
...meaning testing is one thing, yet pounding a box's hard drives over and
above what the test requires does not appeal to me.
FYI, the box I want to test/stress this time
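The usual pairing for this request is memtest86+ for RAM plus the stress
utility for bounded CPU/memory load, which conveniently lets you leave the
disks alone. An illustrative invocation (the counts and duration are
arbitrary):

  # 4 CPU spinners, 2 memory hogs, no disk I/O workers, stop after an hour
  stress --cpu 4 --vm 2 --vm-bytes 512M --timeout 3600s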
2005 Dec 05
2
slow usb hard disk performance.
Dear All,
I tried a USB2 Maxtor OneTouch II external hard disk on a couple of my
CentOS 4.2 boxes and found it initialised the SCSI subsystem OK and
added device "sda". But the performance is miserable, yet with the same
hardware running XP the performance is satisfactory.
hdparm gives results varying from 120k/sec to, at its peak, 4.75M/s on a
USB 2 machine -- still very poor by any
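Two quick checks worth running: a consistent benchmark, and confirming the
drive actually negotiated USB 2.0 high speed rather than falling back to a
USB 1.1 rate. The device name is an assumption:

  # buffered and cached read timings
  hdparm -tT /dev/sda

  # the kernel logs 'new high speed USB device' for a 480Mbit attach
  dmesg | grep -i 'high speed'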
2006 Oct 02
6
Calling All FS Fanatics
Now that I've been enlightened to the terrible write performance of ext3 on my
new 3Ware RAID 5 array, I'm stuck choosing an alternative filesystem. I
benchmarked XFS, JFS, ReiserFS and ext3, and they came back in that order,
from best to worst.
I'm leaning towards XFS because of performance and because centosplus makes
kernel modules available for the stock kernel.
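If XFS wins, aligning it to the array's stripe usually buys a bit more; su
is the controller chunk and sw the number of data disks. The values below
assume a 64KiB chunk and the four-disk RAID 5 from the earlier post (three
data disks), and the device name is a placeholder:

  # align XFS allocation to the RAID5 stripe
  mkfs.xfs -d su=64k,sw=3 /dev/sda1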