similar to: dedicated ZIL/L2ARC

Displaying 20 results from an estimated 2000 matches similar to: "dedicated ZIL/L2ARC"

2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC on one partition and ZIL on the other? Any thoughts?
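For illustration, a rough sketch of what such a split could look like once the SSD carries two slices (the pool name "tank" and the device/slice names are placeholders, not from the original post):

    zpool add tank log c2t0d0s0      # small slice as the separate intent log (slog)
    zpool add tank cache c2t0d0s1    # remaining space as an L2ARC cache device
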
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduce its life span?
Hi, I don't know if it's already been discussed here, but while thinking about using the OCZ Vertex 2 Pro SSD (which according to its spec page has supercaps built in) as a shared slog and L2ARC device, it struck me that this might not be such a good idea. Because this SSD is MLC based, write cycles are an issue here, though I can't find any number in their spec. Why do I
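As a rough back-of-the-envelope check of the wear concern, with every number an assumption for illustration (100 GB of MLC flash rated for 3,000 program/erase cycles, about 50 GB/day of combined slog and L2ARC writes, write amplification ignored):

    # writable total = capacity_GB * P/E cycles; lifetime in days = total / daily writes
    echo $(( 100 * 3000 / 50 ))    # ~6000 days, roughly 16 years, before the rated write limit
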
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII SATA HDDs attached to an Areca 8-port ARC-1220 controller
2009 Apr 09
8
ZIL SSD performance testing... IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing with trying to do performance testing with an SSD-offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS based WebDAV servers), and naturally I'm looking at implementing SSD based ZIL devices. I have a test machine with the
2010 Jun 07
20
Homegrown Hybrid Storage
Hi, I'm looking to build a virtualized web hosting server environment accessing files on a hybrid storage SAN. I was looking at using the Sun X-Fire x4540 with the following configuration: - 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives) - 2 Intel X-25 32GB SSDs as a mirrored ZIL - 4 Intel X-25 64GB SSDs as the L2ARC. -
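A hedged sketch of a zpool create along those lines (device names are placeholders, only two of the six RAID-Z vdevs are shown, and note that hot spares in ZFS are pool-wide rather than per-vdev):

    zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
      spare c2t0d0 c2t1d0 \
      log mirror c3t0d0 c3t1d0 \
      cache c4t0d0 c4t1d0 c4t2d0 c4t3d0
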
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi, I've ordered a new server with: - 4x600GB Toshiba 10K SAS2 Disks - 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander, so I hope no SAS/SATA problems). Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the intent log only needs a small amount of the disks (10GB?), I was wondering
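If each SSD is split into a small log slice and a larger cache slice, the additions might look roughly like this (slice names are placeholders; the slog is mirrored, while the two cache slices are simply striped, since L2ARC devices cannot be mirrored):

    zpool add tank log mirror c1t0d0s0 c1t1d0s0    # ~10 GB slices as a mirrored slog
    zpool add tank cache c1t0d0s1 c1t1d0s1         # remaining space as L2ARC
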
2010 Apr 15
6
ZFS for iSCSI NTFS backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected through 10G for iSCSI to the storage. The Windows box will continue to serve the Windows clients and will be hosting approximately 4TB of data. The physical box is a Sun Fire X4240, single AMD 2435 processor, 16G RAM, LSI 3801E HBA, ixgbe 10g card. I'm looking for suggestions
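One common way to back such an iSCSI LUN on OpenSolaris is a zvol exported through COMSTAR; a minimal sketch, assuming a pool named tank and the COMSTAR services already enabled (names and sizes are placeholders):

    zfs create -V 4T tank/winstore                   # add -s for a sparse (thin) volume
    sbdadm create-lu /dev/zvol/rdsk/tank/winstore    # create the SCSI logical unit
    stmfadm add-view <lu-guid>                       # expose the LU; GUID is printed by sbdadm
    itadm create-target                              # iSCSI target for the Windows initiator
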
2010 Jul 10
2
block align SSD for use as an l2arc cache
I have an Intel X25-M 80GB SSD. For optimum performance, I need to block-align the SSD device, but I am not sure exactly how I should do it. If I run format -> fdisk, it allows me to partition based on a cylinder, but I don't think that is sufficient. Can someone tell me how they block-aligned an SSD device for use in l2arc. Thanks, Geoff
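Not a full alignment recipe, but one way to check the result: prtvtoc reports each slice's first sector, and with 512-byte sectors a starting sector that is a multiple of 8 is 4 KiB aligned (device name is a placeholder):

    prtvtoc /dev/rdsk/c2t0d0s2    # look at the "First Sector" column for each slice
    echo $(( 34 % 8 ))            # non-zero, so a slice starting at sector 34 is not 4 KiB aligned
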
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
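A log vdev can belong to only one pool, so the usual answer is to slice the SSD pair and give each pool its own mirrored slice; roughly (pool and slice names are placeholders):

    zpool add pool1 log mirror c1t0d0s0 c1t1d0s0
    zpool add pool2 log mirror c1t0d0s1 c1t1d0s1
    zpool add pool3 log mirror c1t0d0s3 c1t1d0s3    # s2 is conventionally the whole-disk slice
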
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11-disk RAIDZ2 + 2 spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it, the writes are always very bursty, like this: xpool 488K 20.0T 0 0 0 0 xpool 488K 20.0T 0 0 0 0 xpool
2010 Feb 20
6
l2arc current usage (population size)
Hello, How do you tell how much of your l2arc is populated? I've been looking for a while now and can't seem to find it. It must be easy, as this blog entry shows it over time: http://blogs.sun.com/brendan/entry/l2arc_screenshots And as a follow-up, can you tell how much of each dataset is in the arc or l2arc?
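The populated size shows up in the ARC kstats; a minimal check (values are in bytes):

    kstat -p zfs:0:arcstats:l2_size        # data currently held in the L2ARC
    kstat -p zfs:0:arcstats:l2_hdr_size    # ARC memory spent tracking L2ARC buffers
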
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi, Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)? I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down? I guess it would slow things down, because it would be trying to
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB). It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs... is it just data loss or is it pool loss? Also, does the fact that I have a UPS matter? The numbers I'm seeing are really nice... these are some NFS tar times before
2010 Jan 02
27
Pool import with failed ZIL device now possible?
Hello list, someone (actually Neil Perrin (CC)) mentioned in this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html that it should be possible to import a pool with failed log devices (with or without data loss?). > Has the following error no consequences? > Bug ID 6538021 Synopsis: Need a way to force pool startup when
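On builds with pool version 19 or later (which added log device removal), an import with a failed or missing log device can be forced; a hedged sketch:

    zpool import -m tank    # import despite the missing log; any synchronous writes that
                            # existed only on the slog at crash time are lost
    zpool status tank       # the absent log device is reported so it can be removed or replaced
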
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux
2010 Mar 29
19
sharing an SSD between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
2011 Mar 01
14
Good SLOG devices?
Hi, I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCI-E-based ones are hitting the market, but will those require special drivers? Cost is obviously also an issue here... Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net
2009 Nov 18
2
ZFS and NFS
Hi, My customer says: ------------------------------------ The application has NFS directories with millions of files in a directory, and this can't be changed. We are having issues with the EMC appliance and RPC timeouts on the NFS lookup. What I am looking at doing is moving one of the major NFS exports to a Sun 25K, using VCS to cluster a ZFS RAIDZ that is then NFS exported. For performance I
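For the export side, ZFS handles NFS sharing through a dataset property; a minimal sketch (dataset name and share options are placeholders):

    zfs create tank/export
    zfs set sharenfs=rw tank/export    # or e.g. sharenfs='rw=@10.0.0.0/24,root=adminhost'
    zfs get sharenfs tank/export       # verify the share options
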
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger capacity media server. Also switching over to solaris/zfs. Anyhow we have 24 drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I''m inquiring as to what the best configuration for this is for vdevs. I''m considering the following configurations 4 x x6