similar to: NFS performance issue

Displaying 20 results from an estimated 2000 matches similar to: "NFS performance issue"

2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of commands (zpool import -f and zpool import -FX llift) root at
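For reference, a sketch of the recovery options on b128+ bits; the pool name "llift" comes from the post above, and -n combined with -F is a dry run:

    # Forced import, no rewind:
    zpool import -f llift
    # Recovery mode: discard the last few transactions and roll back
    # to the most recent consistent txg:
    zpool import -F llift
    # Dry run first, to see whether -F would succeed:
    zpool import -Fn llift
    # Extreme rewind (tries much older txgs); a last resort:
    zpool import -FX llift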
2009 Dec 30
1
Boot from external degraded zpool
Hi! I wonder if the following scenario works: I have a Mac mini running as an OSOL box. The OS is installed on the internal hard drive, in the rpool. On rpool there is no redundancy. If I add an external block device (USB / FireWire) to rpool to mirror the internal hard drive, and the internal hard drive fails, can I reboot the system with the detached internal drive but with the
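As a sketch of the setup being asked about (device names hypothetical), mirroring the internal rpool disk to an external one and making the new half bootable would look roughly like this on x86 OSOL:

    # Attach the external disk as a mirror of the internal rpool disk:
    zpool attach rpool c0t0d0s0 c1t0d0s0
    # Wait for the resilver to complete before pulling anything:
    zpool status rpool
    # Put the boot blocks on the new half of the mirror:
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0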
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me b/c I didn't try to replace the log on a running system. My
2010 Apr 10
21
What happens when unmirrored ZIL log device is removed ungracefully
Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using Solaris 10, fully upgraded (zpool version 15 is the latest, which does not have the log device removal introduced in zpool version 19). If you lose an unmirrored log device in any way possible, the OS will crash, and the whole zpool is permanently gone, even after reboots. Using opensolaris,
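To make the version difference concrete, a sketch (pool and device names hypothetical) of what zpool version 19 allows that version 15 does not:

    # List pool versions and the features they add:
    zpool upgrade -v
    # On a version >= 19 pool, an unmirrored log device can simply be removed:
    zpool remove tank c1t0d0
    # On version 15, zpool remove only handles hot spares and cache
    # devices, so a lost unmirrored slog means a lost pool.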
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi. I'm all set to do a performance comparison between Solaris/ZFS and FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I think I'm ready. The machine is a quad-core DELL PowerEdge 1950 with 2GB RAM and 15 x 74GB 10K FC disks accessed via 2x 2Gbit FC links. Unfortunately the links to the disks are the bottleneck, so I'm going to use no more than 4 disks, probably.
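For comparisons like this, a repeatable workload matters more than raw numbers; filebench runs on both platforms, so a minimal sketch (the target directory is hypothetical):

    filebench> load fileserver
    filebench> set $dir=/tank/fbtest
    filebench> run 60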
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used as the ZIL for all three zpools, or is it one ZIL SLOG device per zpool? -- This message posted from opensolaris.org
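A log vdev belongs to exactly one pool, so a single pair of SSDs can only serve all three pools if each SSD is sliced up and each pool gets its own mirrored pair of slices. A sketch with hypothetical pool and slice names (s2 skipped, since it conventionally maps the whole disk):

    zpool add pool1 log mirror c2t0d0s0 c2t1d0s0
    zpool add pool2 log mirror c2t0d0s1 c2t1d0s1
    zpool add pool3 log mirror c2t0d0s3 c2t1d0s3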
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list, someone (actually Neil Perrin, CC'd) mentioned in this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html that it should be possible to import a pool with failed log devices (with or without data loss?). > Has the following error no consequences? > Bug ID 6538021 > Synopsis: Need a way to force pool startup when
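Later bits grew an import flag for exactly this case; where it isn't available, the -F rewind is the fallback. A sketch, pool name hypothetical, noting that -m only exists on builds with the missing-log-device support:

    # Import despite a missing log device, discarding any uncommitted ZIL records:
    zpool import -m tank
    # On older bits: rewind to a consistent earlier txg instead:
    zpool import -F tank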
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11-disk RAIDZ2 + 2 spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it, the writes are always very bursty, like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
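That on/off pattern is usually just the transaction-group sync cycle: asynchronous writes buffer in RAM and get flushed to disk every few seconds. A sketch of observing and shortening the cycle; zfs_txg_timeout is a real OpenSolaris-era tunable, but the value shown is only illustrative:

    # Watch the burst cycle at one-second resolution:
    zpool iostat xpool 1
    # /etc/system entry to sync txgs every 5 seconds (reboot required):
    set zfs:zfs_txg_timeout = 5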
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four iSCSI devices from our NetApp filer. One of these ZFS filesystems contains a number of global and per-user databases in addition to one sixth of the
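One concrete reason to keep the databases on their own filesystem is that tuning properties such as recordsize apply per dataset. A sketch with hypothetical dataset names:

    # Small records for the database working set:
    zfs create -o recordsize=8k tank/cyrus/db
    # Default 128k records are fine for the mail spool:
    zfs create tank/cyrus/mail01
    # Verify what is in effect where:
    zfs get recordsize tank/cyrus/db tank/cyrus/mail01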
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the blocksize of a particular file. I know the blocksize for a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How can I access that information? Some zdb magic? -- Jesus Cea Avion jcea at
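The usual zdb magic: find the file's object number with ls -i, then dump that object's block pointers. A sketch with hypothetical names; the object number shown is illustrative:

    ls -i /tank/data/somefile       # prints the object (inode) number, e.g. 12345
    zdb -ddddd tank/data 12345      # the block size appears in the block-pointer dump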
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks, this might be a daft idea, but is there any way to shut down Solaris / ZFS without flushing the slog device? The reason I ask is that we're planning to use mirrored NVRAM slogs, and in the long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30GB), to facilitate rapid suspend-to-disk of
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update; apt-clone upgrade. Any first impressions? -- Eugen Leitl http://leitl.org ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
2011 Jul 15
22
Zil on multiple usb keys
This might be a stupid question, but here goes... Would adding, say, four 4GB or 8GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume? I am finding reads are not too bad (40ish MB/s over GigE on 2 x 500GB drives, striped) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for the ZIL, would it make a difference? Thanks. Sent from a
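Mechanically this is just another log vdev (hypothetical device names below), but note that cheap USB flash tends to have very poor small synchronous-write latency, so it can easily make sync writes slower rather than faster:

    # Add a mirrored pair of USB keys as the log:
    zpool add tank log mirror c5t0d0p0 c6t0d0p0
    # Back it out again (needs pool version 19+ for log removal):
    zpool remove tank mirror-1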
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, where a directory of files, including some large ones 100+MB in size, being written can cause other clients over NFS to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
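One way to confirm that the stalls line up with synchronous (ZIL) write pressure from the NFS clients is to count zil_commit() calls while a pause is happening; a DTrace sketch:

    # Count ZIL commits until Ctrl-C; a large count during the stall
    # points at sync-write pressure from the NFS traffic:
    dtrace -n 'fbt::zil_commit:entry { @commits = count(); }'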
2010 Apr 27
42
Performance drop during scrub?
Hi all, I have a test system with snv_134 and 8 x 2TB drives in RAIDZ2, and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
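On those bits there is no direct priority knob exposed through zpool(1M); the blunt instruments are stopping the scrub when the pool is busy and rescheduling it off-peak. A sketch using the pool name from the post (the cron schedule is illustrative):

    # Stop the running scrub:
    zpool scrub -s testpool
    # crontab entry: scrub Saturdays at 02:00 instead:
    0 2 * * 6 /usr/sbin/zpool scrub testpool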
2010 Oct 28
0
Good write, but slow read speeds over the network
Hi all, I am running Netatalk on OSol snv_134 on a Dell R610 server with 32 GB RAM. I am experiencing different speeds when writing to and reading from the pool. The pool itself consists of two FC LUNs that each form a vdev (no comments on that please, we discussed that already! ;) ). Now, I have a couple of AFP clients that access this pool via either Fast Ethernet or even Gigabit Ethernet.
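A useful first step is separating pool read speed from the network/AFP layer by reading a large file locally; the file must exceed the 32 GB of RAM or the ARC will hide the disks. A sketch with a hypothetical path:

    # Write a 64GB test file, then read it back locally:
    dd if=/dev/zero of=/pool/testfile bs=1M count=65536
    dd if=/pool/testfile of=/dev/null bs=1M
    # If the local read is fast, suspect the network or AFP layer, not the pool.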
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and have the impression that a lot of folks here are (like me) interested in getting a viable solution for a cheap, fast and reliable ZIL device. I think I can provide such a solution for about $200, but it involves a lot of development work. The basic idea: the main problem when using an HDD as a ZIL device is the cache flushes
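For context, those flushes can be disabled globally, which is only safe when every device in the pool has a nonvolatile (battery- or supercap-backed) write cache; the tunable is real, the risk is the point:

    # /etc/system -- ONLY with NV-protected write caches; otherwise this
    # trades the flush cost for data loss on power failure:
    set zfs:zfs_nocacheflush = 1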
2010 Aug 25
6
(preview) Whitepaper - ZFS Pools Explained - feedback welcome
Hello list, while following this list for more than a year, I feel that it has been a great way to get insights into ZFS. Thank you all for contributing. Over the last months I have been writing a little "whitepaper", trying to consolidate the knowledge collected here. It has now reached a "beta" state and I would like to share the result with you. I call it -
2011 Mar 01
14
Good SLOG devices?
Hi, I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCIe-based ones are hitting the market, but will those require special drivers? Cost is obviously also an issue here.... Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduce its life span?
Hi, I don't know if it's already been discussed here, but while thinking about using the OCZ Vertex 2 Pro SSD (which according to its spec page has supercaps built in) as a shared slog and L2ARC device, it struck me that this might not be such a good idea. Because this SSD is MLC-based, write cycles are an issue here, though I can't find any number in their spec. Why do I
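The sharing itself is just two slices of the same SSD (hypothetical names below); the lifespan worry comes from the slog slice absorbing every synchronous write while the L2ARC slice takes a continuous feed of data evicted from the ARC:

    zpool add tank log c2t0d0s0      # small slice as the slog
    zpool add tank cache c2t0d0s1    # the remainder as L2ARC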