similar to: ZFS works in waves

Displaying 20 results from an estimated 5000 matches similar to: "ZFS works in waves"

2007 Dec 27
1
libata and PMP (Port Multiplier) support in CentOS
I'm looking at buying a NORCO DS-1220 and it comes with a NORCO 4618 PCI-X card (4-port eSATA, with a port multiplier on the card or in the chassis). I've been trying to pin down whether CentOS will see all 12 drives, and I haven't found anything definitive. I see that the kernel module sata_sil24 supports the SiI3124 chip on the NORCO 4618 card, but I haven't found
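As a rough first check from a running CentOS box, one could query the in-kernel driver and scan the boot log for port-multiplier attach messages (a sketch; log strings vary by kernel version, and mainline libata only grew PMP support around 2.6.24, so an older CentOS kernel may see just one drive per eSATA port):

    # Is the sata_sil24 driver available and loaded?
    modinfo sata_sil24
    lsmod | grep sata_sil24

    # Is the controller visible on the PCI bus?
    lspci | grep -i 'silicon image'

    # Look for port-multiplier attach messages after plugging in the enclosure
    dmesg | grep -i 'pmp\|port multiplier'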
2008 Mar 13
12
7-disk raidz achieves 430 MB/s reads and 220 MB/s writes on a $1320 box
I figured the following ZFS "success story" may interest some readers here. I was interested to see how much sequential read/write performance it would be possible to obtain from ZFS running on commodity hardware with modern features such as PCI-E buses, SATA disks, and well-designed SATA controllers (AHCI, SiI3132/SiI3124). So I made this experiment of building a fileserver by
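For readers who want to reproduce this kind of measurement, a minimal sequential-throughput sketch with dd (the pool name tank is an assumption; the file must be much larger than RAM, or the read pass just measures the ARC, and compression should be off since /dev/zero compresses to nothing):

    # Sequential write: 16 GiB into the pool
    dd if=/dev/zero of=/tank/bench.dat bs=1M count=16384

    # Drop the cache (export/import or reboot), then sequential read
    dd if=/tank/bench.dat of=/dev/null bs=1M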
2009 Jun 01
3
External SATA enclosures: SiI3124 and CentOS 5?
Tired of "little problems" trying to keep 7 drives working in an old desktop computer, I'm considering an external SATA drive enclosure with a controller card based on the SiI3124. http://www.ipcdirect.net/servlet/Detail?no=152 I'm a bit concerned about long-term support, namely that the company's driver page only lists drivers through RedHat 4.
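The long-term-support worry can be sidestepped if the stock kernel already carries the driver, making the vendor download irrelevant; a quick check on CentOS (a sketch):

    # Is the driver built for the running kernel?
    grep CONFIG_SATA_SIL24 /boot/config-$(uname -r)

    # Does the module exist and load cleanly?
    modinfo sata_sil24 && modprobe sata_sil24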
2006 Dec 08
22
ZFS Usage in Warehousing (lengthy intro)
Dear all, we're currently looking to restructure our hardware environment for our data warehousing product/suite/solution/whatever. We're currently running the database side on various SF V440s attached via dual FC to our SAN backend (EMC DMX3) with UFS. The storage system is (obviously, in a SAN) shared between many systems. Performance is mediocre in terms
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to create a raid 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install. Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the raid10 array won't show up. Looking through the logs
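For reference, creating the same RAID10 array by hand once the missing module is loaded looks roughly like this (a sketch; device names are placeholders):

    # Load the raid10 personality (absent from the installer environment)
    modprobe raid10

    # Build a 4-disk RAID10 array
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # Confirm it assembled
    cat /proc/mdstat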
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in this area: 6587133 repeated DMA command timeouts and device resets on x4500 6538627 x4500 message logs contain multiple
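For anyone hitting the same disconnects, the workaround the poster had removed disables NCQ via /etc/system (it takes effect only after a reboot); re-adding it is just restoring the line quoted above:

    # /etc/system -- disable NCQ in the sata module (workaround from this thread)
    set sata:sata_func_enable = 0x5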
2007 Oct 09
9
Norco's new storage appliance
Hey all, Has anyone else noticed Norco's recently-announced DS-520 and thought ZFS-ish thoughts? It's a five-SATA, Celeron-based desktop NAS that ships without an OS. http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-520 What practical impact is a 32-bit processor going to have on a ZFS system? (I know this invites speculation, but) might anyone know
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi! I have a problem with ZFS and, most likely, the SATA PCI-X controllers. I run OpenSolaris 2008.11 snv_98 and my hardware is a Sun Netra X4200 M2 with 3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis which each hold 4 SATA disks (Seagate ES.2, 500 and 750 GB) for a total of 12 disks. Every disk has its own eSATA cable connected to the ports on the PCI-X
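When chasing controller problems like this, per-device error counters are a reasonable first stop on OpenSolaris (a sketch):

    # Soft/hard/transport error counts per disk
    iostat -En

    # Per-device latency and queueing, 5-second samples
    iostat -xn 5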
2007 Jul 07
12
ZFS Performance as a function of Disk Slice
First Post! Sorry, I had to get that out of the way to break the ice... I was wondering if it makes sense to zone ZFS pools by disk slice, and if it makes a difference with RAIDZ. As I'm sure we're all aware, the end of a drive is half as fast as the beginning (where the zoning stipulates that the physical outside is the beginning and going towards the spindle increases hex
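To experiment with this, one can slice a disk with format(1M) and build pools on outer versus inner slices; note that ZFS only enables the disk write cache when given a whole disk, which muddies slice-versus-disk comparisons. A sketch (device and slice names are placeholders):

    # Pool on the outer (fast) slice of one disk
    zpool create fastpool c0t1d0s0

    # Pool on an inner (slow) slice of another disk
    zpool create slowpool c0t2d0s6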
2008 Jan 10
2
NCQ
fun example that shows NCQ lowers wait and %w, but doesn't have much impact on final speed. [scrubbing, devs reordered for clarity]

                     extended device statistics
    device      r/s    w/s     kr/s   kw/s  wait  actv  svc_t  %w  %b
    sd2       454.7    0.0  47168.0    0.0   0.0   5.7   12.6   0  74
    sd4       440.7    0.0  45825.9    0.0   0.0   5.5   12.4   0  78
    sd6       445.7    0.0
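That output is ordinary Solaris extended device statistics; to watch the same counters during a scrub, something like the following (pool name is assumed):

    # Extended device statistics, 5-second intervals
    iostat -x 5

    # Kick off the scrub being measured
    zpool scrub tank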
2006 Nov 27
5
startx reboots my computer
Well, sometimes startx reboots my computer. Other times it does nothing except slow the machine to a crawl for a while. Sometimes it locks up. I am using init level 3. At the command line everything looks fine, and I can run whatever commands I want. When I type startx, then things go down the tubes. What doesn't happen: X never starts, no error messages get posted to dmesg, no log is generated in
2006 Mar 08
5
SWAT is working but shows smbd/nmbd: not running
Hi all, I had an nForce3 motherboard that broke, so I bought an nForce4 motherboard and connected the old hard disk to it. For network and sound support I downloaded drivers from the nvidia.com site. The driver requires the kernel source and the gcc compiler to be on the local disk; after I installed those, the driver installed successfully. From that point (but maybe it is
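Before trusting SWAT's status display, it may be worth confirming the daemons directly (a sketch for a Red Hat-style system):

    # Are the daemons actually running?
    pgrep -l smbd
    pgrep -l nmbd

    # Init-script view of the same thing
    service smb status

    # Can the server answer an anonymous share listing?
    smbclient -L localhost -N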
2009 Sep 25
6
SATA vs RAID5 vs VMware
Hello, I have strange behaviour on a server that I can't get a handle on. I have a reasonably powerful server running VMware Server 1.0.4-56528, with a RAID5 array built with mdadm on 5 SATA drives, masses of RAM, and 2 Xeon CPUs. But it stutters. Example: fire up vi and hold down i. After filling 2-3 lines, the display freezes for 2-12 seconds, then catches up. This
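A sketch of the usual first checks for this kind of stall, assuming the array is /dev/md0: whether the array is degraded or resyncing in the background, and whether one member disk shows outsized latency:

    # Degraded array or background resync?
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Per-disk utilization and wait times, 5-second samples
    iostat -x 5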
2005 Jun 22
11
Opteron Mobo Suggestions
I've been planning to build a dual Opteron server for a while. I'd like to get people's suggestions on a suitable motherboard. I've looked at the Tyan K8SE (S2892) and K8SRE (S2891) but would like to find more Linux-specific experiences with these boards. Some features I expect are at least 4 SATA (SATA-300?) ports, serial console support in the BIOS, USB 2.0 and IEEE-1394
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16 GB of RAM, OpenSolaris upgraded to snv_134. The zpool
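For comparison, a minimal version of such a layout (a sketch; device names are placeholders, and mirroring the log device is generally advised since losing an unmirrored slog on older pool versions could be fatal):

    # Two raidz2 data vdevs plus a mirrored SSD log
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        log mirror c3t0d0 c3t1d0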
2009 Nov 13
11
scrub differs in execute time?
I have a raidz2 and did a scrub, it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub?? Why is that?
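To compare the two runs on equal terms, the scrub rate and elapsed time can be read off zpool status while the scrub runs (pool name is assumed):

    zpool scrub tank

    # Repeat while it runs; shows percent done and the current scrub rate
    zpool status tank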
2006 Jul 17
11
ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
Hi All, I've just built an 8-disk ZFS storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promise of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
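One way to chase quirky numbers like these is to watch whether I/O is spread evenly across the vdevs while the benchmark runs; an uneven spread often explains odd aggregates (pool name is assumed):

    # Per-vdev and per-disk bandwidth, 5-second samples
    zpool iostat -v tank 5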
2005 Jun 10
2
Tyan K8SE (S2892) / nForce Pro Experiences (Clarification)
From: "Bryan J. Smith <b.j.smith at ieee.org>" <thebs413 at earthlink.net> > I wrote a pre-sale evaluation back in January 2005 here: > http://lists.leap-cf.org/pipermail/leaplist/2005-January/000532.html > ... Tyan S2895 -- nForce Pro 2200+2050 Just know that the pre-sale evaluation was of the nForce4, and didn't take the nForce Pro 2200 and 2200+2050
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and iostat -xn show lots of idle disk time, no above-average service times, and no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz cores, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
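With 96 spindles, a single dying disk can throttle the whole scrub; one approach is to sample per-device service times during the scrub and look for an outlier (a sketch):

    # An asvc_t far above its neighbours usually marks the sick disk
    iostat -xn 5

    # Cross-check against FMA error telemetry
    fmdump -e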