similar to: New ZFS Intent Log (ZIL) device available - Beta program now open!

Displaying 20 results from an estimated 20000 matches similar to: "New ZFS Intent Log (ZIL) device available - Beta program now open!"

2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while... What is the status of ZFS support for TRIM? For the pool in general... and... specifically for the slog and/or cache?
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11-disk RAIDZ2 + 2 spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it, the writes are always very bursty, like this (zpool iostat output):
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
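For context, a minimal sketch (hypothetical device names; the pool name xpool is taken from the iostat output above) of how such a layout is typically built and how the bursts can be watched per vdev:

    # one of the two 11-disk raidz2 vdevs, plus spares and a mirrored slog
    zpool create xpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                              c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0
    zpool add xpool spare c1t22d0 c1t23d0
    zpool add xpool log mirror c2t0d0 c2t1d0
    zpool iostat -v xpool 1     # per-vdev stats make the write bursts visible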
2009 Jun 30
21
ZFS, power failures, and UPSes
Hello, I've looked around Google and the zfs-discuss archives but have not been able to find a good answer to this question (and the related questions that follow it): How well does ZFS handle unexpected power failures? (e.g. environmental power failures, power supply dying, etc.) Does it consistently recover gracefully? Should having a UPS be considered a (strong) recommendation or
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, while following the discussion about which SSD to use as a ZIL drive, I stumbled across this article, which discusses short stroking for increasing IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now I am wondering if using a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
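A minimal sketch, with hypothetical device and pool names, of what adding such a mirrored pair of 15k SAS drives as a slog would look like:

    zpool add tank log mirror c3t0d0 c3t1d0
    zpool status tank           # the pair appears under a separate 'logs' section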
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and have the impression that a lot of folks here are (like me) interested in getting a viable solution for a cheap, fast and reliable ZIL device. I think I can provide such a solution for about $200, but it involves a lot of development work. The basic idea: the main problem when using an HDD as a ZIL device is the cache flushes
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 U4 does not have any of the nice ZIL controls that exist in the various recent OpenSolaris flavors? I would like to move my ZIL to solid state storage, but I fear I can't do it until I have another update. Heck, I would be happy to just be able to turn the ZIL off to see how my NFS-on-ZFS performance is affected before spending the $'s. Anyone
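For reference, a hedged sketch of what those controls look like on releases that do have them (the thread's point is that S10U4 lacked them); the device name is hypothetical:

    zpool add tank log c4t0d0   # dedicated slog, available on recent OpenSolaris builds

and, for testing only, the ZIL could be disabled globally via /etc/system (reboot required):

    set zfs:zil_disable = 1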
2011 Aug 11
19
Intel 320 as ZIL?
Are any of you using the Intel 320 as ZIL? It's MLC based, but I understand its wear and performance characteristics can be bumped up significantly by increasing the overprovisioning to 20% (dropping usable capacity to 80%). Anyone have experience with this? Ray
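One hedged sketch of the 20% over-provisioning approach (device and slice names are hypothetical): use format(1M) to create a slice covering only about 80% of the SSD, leave the remainder unallocated, and give ZFS just that slice:

    zpool add tank log c5t0d0s0   # s0 sized to ~80% of the drive; the rest stays unused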
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux
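A minimal sketch of the usual first-pass checks on the storage side for a setup like this, assuming the pool is named tank:

    zpool iostat -v tank 5   # per-vdev view: are the ZIL and cache SSDs the busy devices?
    zpool status -x          # quick check for degraded or resilvering devices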
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello, I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
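A minimal sketch of the ZFS-side option, assuming the Perc exposes the two disks individually (device and pool names are hypothetical):

    zpool create fastpool c1t0d0 c1t1d0   # two top-level vdevs: ZFS stripes across them (RAID0 equivalent)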
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB). It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs... is it just data loss or is it pool loss? Also, does the fact that I have a UPS matter? The numbers I'm seeing are really nice... these are some NFS tar times before
2010 Jun 25
13
OCZ Vertex 2 Pro performance numbers
Now the test for the Vertex 2 Pro. This was fun. For more explanation please see the thread "Crucial RealSSD C300 and cache flush?" This time I made sure the device is attached via 3Gbit SATA. This is also only a short test. I'll retest after some weeks of usage. Cache enabled, 32 buffers, 64k blocks: linear write, random data: 96 MB/s; linear read, random data: 206 MB/s; linear
2006 Dec 12
23
ZFS Storage Pool advice
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: on our EMC storage array we will create 3 LUNs. Now how would ZFS be used for the best performance? What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
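A minimal sketch of the two options being compared, with hypothetical LUN device names:

    # one pool striped across all three LUNs
    zpool create tank c6t0d0 c6t1d0 c6t2d0

    # versus one pool per LUN
    zpool create tank1 c6t0d0
    zpool create tank2 c6t1d0
    zpool create tank3 c6t2d0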
2009 Apr 11
17
Supermicro SAS/SATA controllers?
The standard controller that has been recommended in the past is the AOC-SAT2-MV8 - an 8-port card with a Marvell chipset. There have been several mentions of LSI-based controllers on the mailing lists and I'm wondering about them. One obvious difference is that the Marvell controller is PCI-X and the LSI controllers are PCI-E. Supermicro have several LSI controllers. AOC-USASLP-L8i with the
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks, This might be a daft idea, but is there any way to shut down Solaris / ZFS without flushing the slog device? The reason I ask is that we're planning to use mirrored NVRAM slogs, and in the long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30GB), to facilitate rapid suspend to disk of
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi, I've ordered a new server with:
- 4x 600GB Toshiba 10K SAS2 disks
- 2x 100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander, so I hope no SAS/SATA problems)
Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the intent log only needs a small part of each disk (10GB?), I was wondering
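A minimal sketch of the split described above, assuming each SSD carries a small slice (s0, roughly 10GB) for the log and a larger slice (s1) for cache; device and slice names are hypothetical:

    zpool add tank log mirror c2t0d0s0 c2t1d0s0   # mirrored slog across both SSDs
    zpool add tank cache c2t0d0s1 c2t1d0s1        # L2ARC: cache devices are striped, they cannot be mirrored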
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby a directory of files, including some large ones 100+MB in size, being written can cause other clients over NFS to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list, someone (actually Neil Perrin (CC)) mentioned in this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html that it should be possible to import a pool with failed log devices (with or without data loss?).
> Has the following error no consequences?
> Bug ID 6538021
> Synopsis: Need a way to force pool startup when
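For reference, on builds that support log device removal the forced import is done with the -m flag (pool name hypothetical):

    zpool import -m tank   # import proceeds even though a log device is missing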
2008 Jul 31
9
Terrible zfs performance under NFS load
Hello, We have an S10U5 server sharing up NFS shares from ZFS. While using an NFS mount as the log destination for syslog for 20 or so busy mail servers, we have noticed that the throughput becomes severely degraded after a short while. I have tried disabling the ZIL and turning off cache flushing, and I have not seen any changes in performance. The servers are only pushing about 1MB/s of constant
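The "turning off cache flushing" step mentioned above is usually the /etc/system tunable below (test-only, reboot required); disabling the ZIL is done the same way with zfs:zil_disable:

    set zfs:zfs_nocacheflush = 1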
2010 Jan 11
5
internal backup power supplies?
With all the recent discussion of SSDs that lack suitable power-failure cache protection, surely there's an opportunity for a separate modular solution? I know there used to be (years and years ago) small internal UPSes that fit in a few 5.25" drive bays. They were designed to power the motherboard and peripherals, with the advantage of simplicity and efficiency
2010 Oct 19
7
SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting up a 64 GB SSD into four partitions of 16 GB each, versus having an entire SSD dedicated to each pool?
Scenario A:
2 TB mirror w/ 16 GB read cache partition
2 TB mirror w/ 16 GB read cache partition
2 TB mirror w/ 16 GB read cache partition
2 TB mirror w/ 16 GB read cache partition
versus Scenario B:
2 TB mirror w/ 64 GB read cache SSD
2 TB
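A minimal sketch of scenario A, assuming four slices on the shared SSD (s2 is skipped because it conventionally maps the whole disk on Solaris); pool, device and slice names are hypothetical:

    zpool add pool1 cache c7t0d0s0
    zpool add pool2 cache c7t0d0s1
    zpool add pool3 cache c7t0d0s3
    zpool add pool4 cache c7t0d0s4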