similar to: Pool import with failed ZIL device now possible?

Displaying 20 results from an estimated 3000 matches similar to: "Pool import with failed ZIL device now possible?"

2010 Feb 20
6
l2arc current usage (population size)
Hello, How do you tell how much of your L2ARC is populated? I've been looking for a while now and can't seem to find it. It must be easy, as this blog entry shows it over time: http://blogs.sun.com/brendan/entry/l2arc_screenshots And as a follow-up: can you tell how much of each dataset is in the ARC or L2ARC?
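A hedged sketch: on OpenSolaris-era builds the relevant counters are exposed through kstat, so assuming the usual zfs:0:arcstats statistic names, something like this should report the populated L2ARC size in bytes:

    # bytes of data currently held in the L2ARC, plus its in-ARC header overhead
    kstat -p zfs:0:arcstats:l2_size
    kstat -p zfs:0:arcstats:l2_hdr_size

Per-dataset residency is not broken out by these counters, as far as I know.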
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four iSCSI devices from our NetApp filer. One of these ZFS filesystems contains a number of global and per-user databases in addition to one sixth of the
2010 Jan 31
5
server hang with compression on, ping timeouts from remote machine
Hello All, I am running NTFS over iSCSI on a ZFS zvol with compression=gzip-9 and blocksize=8K. The server is a 2-core P4 3.0 GHz with 5 GB of RAM. Whenever I start copying files from Windows onto the ZFS disk, after about 100-200 MB have been copied the server starts to experience freezes. I have iostat running, which freezes as well. Even pings on both of the network adapters are reporting
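gzip-9 is by far the most CPU-expensive ZFS compression setting and runs in the write path, which on a 2-core P4 can plausibly starve everything else. A hedged sketch of checking the settings and backing off to a cheaper algorithm (the pool/zvol name is illustrative; already-written blocks keep their old compression):

    zfs get compression,volblocksize tank/ntfsvol
    zfs set compression=lzjb tank/ntfsvol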
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby a directory of files, including some large ones 100+ MB in size, being written can cause other clients over NFS to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
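Since NFS writes are largely synchronous, they hammer the ZIL on the data disks; a dedicated slog can take that traffic off the spindles. A sketch, with illustrative device names:

    # single dedicated log device
    zpool add tank log c3t0d0
    # or, safer, a mirrored pair
    zpool add tank log mirror c3t0d0 c4t0d0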
2009 Dec 03
5
L2ARC in clusters
Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), and when a pool switches over to the other node, ZFS would pick up that node's local disk drives as L2ARC. To better clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
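Cache devices can be added and removed at any time, and losing their contents is harmless (the L2ARC just rewarms), so one plausible approach is to have the cluster failover script re-attach the local SSDs after import. Device names are illustrative:

    # after the pool is imported on this node
    zpool add pool1 cache c1t2d0 c1t3d0
    # before failing the pool over to the other node
    zpool remove pool1 c1t2d0 c1t3d0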
2006 Dec 12
23
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: on our EMC storage array we will create 3 LUNs. Now how should ZFS be used for the best performance? What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
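For reference, a single pool striped across all three LUNs would look like the sketch below (device names illustrative); ZFS then balances writes across the LUNs itself:

    zpool create tank c2t0d0 c2t1d0 c2t2d0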
2012 Dec 01
3
6Tb Database with ZFS
Hello, I'm about to migrate a 6 TB database from Veritas Volume Manager to ZFS. I want to set the arc_max parameter so ZFS can't use all of my system's memory, but I don't know how much I should set. Do you think 24 GB will be enough for a 6 TB database? Obviously the more the better, but I can't set aside too much memory. Has someone successfully implemented something similar? We ran some tests and the
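On Solaris the ARC ceiling is set in /etc/system (reboot required). A sketch for a 24 GB cap, i.e. 24 * 1024^3 bytes:

    * cap the ZFS ARC at 24 GB (value in bytes)
    set zfs:zfs_arc_max = 25769803776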
2008 Oct 26
4
Cannot remove slog device from zpool
Hello, I've looked quickly through the archives and haven't found mention of this issue. I'm running SXCE (snv_99), which I believe uses ZFS version 13. I had an existing zpool:
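As far as I recall, log device removal only arrived with zpool version 19 (around snv_125), so a snv_99 / v13 pool simply cannot do it yet. After upgrading to bits with v19+, something like this should work (device name illustrative):

    zpool upgrade tank
    zpool remove tank c2t0d0    # removing log devices is supported from pool v19 on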
2011 Jul 15
22
Zil on multiple usb keys
This might be a stupid question, but here goes... Would adding, say, four 4 GB or 8 GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume? I am finding reads are not too bad (40-ish MB/s over GigE on 2 striped 500 GB drives) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for the ZIL, would it make a difference? Thanks. Sent from a
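USB keys tend to have poor small synchronous-write latency, so they may well make an iSCSI workload slower rather than faster; if experimenting anyway, mirroring them limits the damage of a failed key. Device names are illustrative:

    zpool add tank log mirror c5t0d0 c6t0d0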
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of the commands (zpool import -f and zpool import -FX llift) root at
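If the b134 bits include pool version 19 support (I believe they do), importing with a missing log device is possible with -m, accepting the loss of any uncommitted synchronous writes:

    zpool import -m llift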
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss, Relatively low traffic to the pool, but sync takes too long to complete and other operations are also not that fast. Disks are on a 3510 array. zil_disable=1.

    bash-3.00# ptime sync
    real     1:21.569
    user        0.001
    sys         0.027

During sync, zpool iostat and vmstat look like:

    f3-1    504G   720G    370    859   995K  10.2M
    misc   20.6M  52.0G      0      0
2008 Apr 12
5
ZVOL access permissions?
How can I set up a ZVOL that's accessible by non-root users, too? The intent is to use sparse ZVOLs as raw disks in virtualization (reducing overhead compared to file-based virtual volumes). Thanks, -mg
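A hedged sketch: on Solaris the zvol shows up under /dev/zvol/[r]dsk/<pool>/<vol>, and changing ownership of that node can be enough for a non-root consumer, though the change may need to be reapplied after a reboot since the device nodes are recreated. Pool, volume, and user names are hypothetical:

    chown vmuser /dev/zvol/rdsk/tank/vm0
    chmod 600 /dev/zvol/rdsk/tank/vm0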
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL slog device per zpool?
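A log device belongs to exactly one pool, but nothing stops you from slicing each SSD and giving every pool its own mirrored slice pair. A sketch with illustrative slice names (slices cut with format(1M); s2 left as the conventional whole-disk slice):

    zpool add pool1 log mirror c4t0d0s0 c4t1d0s0
    zpool add pool2 log mirror c4t0d0s1 c4t1d0s1
    zpool add pool3 log mirror c4t0d0s3 c4t1d0s3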
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi, Out of pure curiosity, I was wondering what would happen if one tries to use a regular 7200 RPM (or 10K) drive as slog or L2ARC (or both)? I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down? I guess it would slow things down, because it would be trying to
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi. I'm all set for doing a performance comparison between Solaris/ZFS and FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I think I'm ready. The machine is 1x quad-core DELL PowerEdge 1950, 2 GB RAM, 15 x 74 GB 10K FC disks accessed via 2x 2Gbit FC links. Unfortunately the links to the disks are the bottleneck, so I'm going to use no more than 4 disks, probably.
2006 Jul 15
2
zvol of files for Oracle?
Hello zfs-discuss, What would you rather propose for ZFS+Oracle from a performance standpoint - zvols or just files? -- Best regards, Robert mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
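Whichever route is taken, matching the ZFS block size to Oracle's db_block_size (commonly 8 KB) tends to matter more than zvol-versus-files. A sketch, dataset names illustrative:

    # zvol route: match volblocksize to the DB block size
    zfs create -V 100G -o volblocksize=8k tank/oradata
    # file route: match recordsize instead
    zfs create -o recordsize=8k tank/orafiles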
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1 TB system being used as an NFS file server: Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11-disk RAIDZ2 + 2 spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it, the writes are always very bursty, like this:

    xpool    488K  20.0T      0      0      0      0
    xpool    488K  20.0T      0      0      0      0
    xpool
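Some burstiness is expected: ZFS batches writes into transaction groups and syncs them at an interval, so throughput naturally pulses at that period. On OpenSolaris-era builds the interval is tunable in /etc/system (tunable name per those builds; reboot required):

    * sync transaction groups every 5 seconds instead of the default
    set zfs:zfs_txg_timeout = 5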
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks, This might be a daft idea, but is there any way to shut down Solaris/ZFS without flushing the slog device? The reason I ask is that we're planning to use mirrored NVRAM slogs, and in the long term hope to use a pair of 80 GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30 GB), to facilitate rapid suspend-to-disk of
2012 Nov 20
6
zvol wrapped in a vmdk by Virtual Box and double writes?
Hi folks, (Long time no post...) Only starting to get into this one, so apologies if I'm light on detail, but... I have a shiny SSD I'm using to help make some VirtualBox stuff I'm doing go fast. I have a 240 GB Intel 520 series jobbie. Nice. I chopped it into a few slices - p0 (partition table), p1 128 GB, p2 60 GB. As part of my work, I have used it both as a RAW
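For the raw-device case, VirtualBox can wrap a disk, partition, or zvol device node in a vmdk descriptor; a sketch with illustrative paths:

    VBoxManage internalcommands createrawvmdk \
        -filename /vbox/ssd-p2.vmdk -rawdisk /dev/rdsk/c1t0d0p2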
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update apt-clone upgrade Any first impressions? -- Eugen Leitl <http://leitl.org>