Displaying 20 results from an estimated 11000 matches similar to: "Separate Zil on HDD ?"
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB).
It seems to work really well as a ZIL, performance-wise. My question is, how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss, or is it pool loss?
Also, does the fact that I have a UPS matter?
The numbers I'm seeing are really nice... these are some NFS tar times
before
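
For reference, a separate log device is attached and, on pool version 19 or later, detached like this; the pool and device names here are hypothetical:

  zpool add tank log c1t0d0        # attach the SSD as a slog
  zpool remove tank c1t0d0         # detach it again (needs zpool version >= 19)

Losing an unmirrored slog on a running pool costs at most the last few seconds of synchronous writes; on pool versions before 19, a pool whose slog went missing could fail to import, which is the pool-loss scenario to plan around.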
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi,
while following the discussion about which SSD to use as a ZIL
device, I stumbled across this article, which discusses short-stroking
to increase IOPS on SAS and SATA drives:
http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html
Now, I am wondering if using a mirror of such 15k SAS drives would be a
good-enough fit for a ZIL on a zpool that is mainly used for file
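
A mirrored slog built from small outer-track slices of two such 15k SAS drives would be added like this; a sketch assuming a small slice s0 has been created at the start of each disk with format(1M), with hypothetical device names:

  zpool add tank log mirror c2t0d0s0 c2t1d0s0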
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, a 3GHz Core 2 Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8-port ARC-1220 controller
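
On a budget build, the slog and L2ARC are simply added to an existing pool as separate top-level devices; a minimal sketch with hypothetical device names (the slog wants low write latency, the cache device just adds read capacity):

  zpool add tank log c3t0d0      # dedicated log (slog) device
  zpool add tank cache c3t1d0    # L2ARC read cache device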
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution for slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
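
One common way to split four such SSDs is a mirrored pair for the slog plus two cache devices; a sketch, assuming hypothetical device names:

  zpool add tank log mirror c4t0d0 c4t1d0
  zpool add tank cache c4t2d0 c4t3d0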
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
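
The burst pattern is easiest to see per-vdev with a one-second interval; the pool name follows the original post:

  zpool iostat -v xpool 1

Bursts like this are usually the transaction-group flush: writes accumulate in memory and hit the disks at each transaction-group commit rather than continuously.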
2010 Apr 27
42
Performance drop during scrub?
Hi all
I have a test system with snv_134 and 8x2TB drives in RAIDZ2 and currently no ZIL or L2ARC devices. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool.
How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down the scrub's priority somehow?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at
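
On builds with the rewritten scan code, the scrub can be throttled with the zfs_scrub_delay tunable (the per-I/O delay, in ticks, applied when the pool is busy); the value here is only an example:

  echo zfs_scrub_delay/W0t8 | mdb -kw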
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
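
Separate filesystems in the same pool share the pool's bandwidth but can carry different properties; a hypothetical layout separating the databases so they can be tuned independently:

  zfs create tank/imap
  zfs create -o recordsize=16K tank/imap/db    # smaller records for database-style I/O
  zfs create tank/imap/mbox01                  # one of the six mailbox filesystems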
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting
with this problem and I wanted to throw it out here again.
All of our hardware is from Silicon Mechanics (SuperMicro chassis and
motherboards).
Up until now, all of the hardware has had a single 24-disk expander /
backplane -- but we recently got one of the new SC847-based models with
24 disks up front and 12 in the
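
When chasing retry errors like these, the per-device error counters and FMA error reports are the usual starting points; both commands are standard Solaris:

  iostat -En          # soft/hard/transport error counts per device
  fmdump -eV          # detailed FMA error events, including SCSI retries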
2010 Jun 07
20
Homegrown Hybrid Storage
Hi,
I'm looking to build a virtualized web hosting server environment accessing
files on a hybrid storage SAN. I was looking at using the Sun Fire X4540
with the following configuration:
- 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives)
- 2 Intel X-25 32GB SSDs as a mirrored ZIL
- 4 Intel X-25 64GB SSDs as the L2ARC.
-
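
A scaled-down sketch of that layout (two data vdevs instead of six, hypothetical device names) showing how the spares, log mirror, and cache devices all go into one zpool create:

  zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
    spare c2t2d0 c2t3d0 \
    log mirror c3t0d0 c3t1d0 \
    cache c3t2d0 c3t3d0 c3t4d0 c3t5d0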
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
--
This message posted from opensolaris.org
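
A log device belongs to exactly one pool, but slices of the same SSDs can serve several pools; a sketch assuming three slices have been created on each SSD with format(1M):

  zpool add pool1 log mirror c5t0d0s0 c5t1d0s0
  zpool add pool2 log mirror c5t0d0s1 c5t1d0s1
  zpool add pool3 log mirror c5t0d0s2 c5t1d0s2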
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger-capacity media server. Also switching over to Solaris/ZFS.
Anyhow, we have a 24-drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to what the best vdev configuration for this is. I'm considering the following configurations:
4 x 6
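
If the choice lands on four 6-disk raidz vdevs (raidz2 shown here), the pool would be created in one go; hypothetical device names:

  zpool create media \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
    raidz2 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 \
    raidz2 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0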
2009 Nov 18
2
ZFS and NFS
Hi,
My customer says:
------------------------------------
The application has NFS directories with millions of files in a directory,
and this can't be changed.
We are having issues with the EMC appliance and RPC timeouts on the NFS
lookup. What I am looking at doing
is moving one of the major NFS exports to a Sun 25k, using VCS to
cluster a ZFS RAIDZ that is then NFS-exported.
For performance I
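
Exporting a ZFS filesystem over NFS is a one-liner via the sharenfs property; the dataset name is hypothetical:

  zfs set sharenfs=on tank/export    # or sharenfs=rw,... for specific share options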
2009 Apr 23
1
Unexpectedly poor 10-disk RAID-Z2 performance?
Hail, caesar.
I've got a 10-disk RAID-Z2 backed by the 1.5 TB Seagate drives
everyone's so fond of. They've all received a firmware upgrade (the
sane one, not the one that caused your drives to brick if the internal
event log hit the wrong number on boot).
They're attached to an ARC-1280ML, a reasonably good SATA controller,
which has 1 GB of ECC DDR2 for
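
Worth keeping in mind when sizing expectations: a raidz vdev serves roughly one disk's worth of random IOPS, because every read touches all data columns. A comment-only sketch of the arithmetic, assuming ~80 IOPS per 7200RPM disk:

  # One 10-disk raidz2 vdev:
  #   random IOPS  ~= 80           (one disk's worth, not 8 x 80)
  #   streaming BW ~= 8 x per-disk MB/s (the 8 data disks do add up sequentially)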
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive times-to-completion for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using an SSD as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
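
If the slog is being flooded by large sequential writes, the logbias property (available from snv_122 and in later Solaris 10 updates) can steer them to the main pool so the SSD only handles latency-sensitive commits; the dataset name is hypothetical:

  zfs set logbias=throughput tank/nfsdata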
2009 Apr 27
23
Raidz vdev size... again.
Hi,
I'm new to the list so please bear with me. This isn't an OpenSolaris-specific
problem but I hope it's still the right list to post to.
I'm on the way to moving a backup server to ZFS-based storage, but I
don't want to spend too many drives on parity (the 16 drives are attached
to a 3ware RAID controller so I could also just use RAID6 there).
I
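
For 16 drives, two 8-disk raidz2 vdevs give the same 4-of-16 parity overhead as a pair of RAID6 arrays while doubling the vdev count (and thus random IOPS); a sketch with hypothetical device names:

  zpool create backup \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0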
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
RAID5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
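
The command lines asked for would look like this (pool and device names hypothetical); ZFS then stripes across the two raidz vdevs automatically:

  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
  zpool upgrade tank    # bring the pool format up to the running version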
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
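
The ZFS equivalent of RAID10 is a stripe of mirrors, created in a single command (the stripe across the mirror pairs is implicit); device names are hypothetical:

  zpool create tank \
    mirror c1t1d0 c1t5d0 \
    mirror c1t2d0 c1t6d0 \
    mirror c1t3d0 c1t7d0 \
    mirror c1t4d0 c1t8d0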
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to its spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC-based, write cycles are an issue here,
though I can't find any numbers in the spec.
Why do I
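
Sharing one SSD between the two roles is done by slicing it; a sketch assuming a small s0 slice for the slog with the remainder as s1:

  zpool add tank log c6t0d0s0      # small slice as the slog
  zpool add tank cache c6t0d0s1    # remainder as L2ARC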
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering
what drives to put in the bays. My chassis is a Supermicro SC846A, so the
backplane supports SAS or SATA; my controllers are LSI 3081E, again
supporting SAS or SATA.
Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM
drive in both SAS and SATA configurations; the SAS model offers
2009 Jun 30
21
ZFS, power failures, and UPSes
Hello,
I've looked around Google and the zfs-discuss archives but have not been
able to find a good answer to this question (and the related questions
that follow it):
How well does ZFS handle unexpected power failures? (e.g. environmental
power failures, power supply dying, etc.)
Does it consistently gracefully recover?
Should having a UPS be considered a (strong) recommendation or
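
ZFS is copy-on-write, so after a power failure the pool should come back at its last consistent transaction group (losing only un-synced in-flight writes), assuming the disks honor cache flushes; from build 128 onward a pool that does come up damaged can also be rolled back to an earlier transaction group at import time:

  zpool import -F tank    # recovery-mode import; pool name hypothetical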