similar to: NFS and ZFS, a fine combination

Displaying 20 results from an estimated 3000 matches similar to: "NFS and ZFS, a fine combination"

2009 Dec 02
10
Separate Zil on HDD ?
Hi all, I have a home server based on SNV_127 with 8 disks: a 2 x 500GB mirrored root pool and a 6 x 1TB raidz2 data pool. This server performs a few functions: NFS for several 'lab' ESX virtual machines, NFS for MythTV storage (videos, music, recordings etc.), and Samba for home directories for all networked PCs. I back up the important data to an external USB HDD each day. I previously had
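A dedicated log device is normally attached to an existing pool with zpool add; as a rough sketch (pool and device names are placeholders, not the poster's actual devices):

    # add a single dedicated log (slog) device to an existing data pool
    zpool add datapool log c0t8d0

Whether a rotating HDD, rather than an SSD or NVRAM device, is fast enough to be worth dedicating as a slog is the question the subject line raises.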
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 U4 does not have any of the nice ZIL controls that exist in the various recent OpenSolaris flavors? I would like to move my ZIL to solid state storage, but I fear I can't do it until I have another update. Heck, I would be happy to just be able to turn the ZIL off to see how my NFS-on-ZFS performance is affected before spending the $'s. Anyone
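For context: on builds where the tunable exists at all (which is part of what the poster is asking), the ZIL of that era could only be disabled globally via an /etc/system setting; the per-dataset sync property came later. A hedged sketch, for testing only, not something to leave set in production:

    * /etc/system entry to disable the ZIL globally (takes effect after a reboot)
    set zfs:zil_disable = 1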
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four iSCSI devices from our NetApp filer. One of these ZFS filesystems contains a number of global and per-user databases in addition to one sixth of the
2006 Jun 22
2
ZFS throttling - how does it work?
Hi zfs-discuss, I have some questions about throttling on ZFS. 1) I know that throttling is activated while one sync is waiting for another (http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs). Is it possible to throttle only selected processes (e.g. nfsd)? 2) How can I obtain some statistics about it? I want to know how often throttling kicks in on my host, etc. 3) Is it
2006 Dec 12
23
ZFS Storage Pool advice
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: on our EMC storage array we will create 3 LUNs. Now how would ZFS be used for the best performance? What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
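The two layouts being weighed look roughly like this; a minimal sketch, with LUN device names made up for illustration:

    # option 1: one pool striped across all three LUNs
    zpool create tank c4t0d0 c4t1d0 c4t2d0

    # option 2: three independent single-LUN pools
    zpool create tank1 c4t0d0
    zpool create tank2 c4t1d0
    zpool create tank3 c4t2d0

Since the EMC array presumably provides redundancy underneath the LUNs, neither sketch adds ZFS-level mirroring or raidz; that trade-off is its own discussion.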
2015 Oct 09
3
reverse object creation
Dear all, this is my first message to this mailing list - please advise if it is not the right place for the subject. I've been using R very intensively for the last 3-4 years, and one of the most tedious tasks is the modification of lookup or conversion tables. So far, I have not found functions that create the commands for creating objects (vectors, data frames) based on the objects themselves -
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby a directory of files, including some large ones 100+MB in size, being written can cause other clients over NFS to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
2012 Jan 04
9
Stress test zfs
Hi all, I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB memory. Right now oracle . I've been trying to load test the box with bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more than a couple K for writes. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda
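For reference, a bonnie++ run of the kind described is usually invoked along these lines; the path, size and user below are placeholders, and the -s size is deliberately set well above the 128GB of RAM so the ARC cannot cache the whole working set:

    # sequential and random I/O test against a directory on the pool
    bonnie++ -d /tank/bench -s 256g -n 0 -u nobody

(-d is the test directory, -s the total file size, -n 0 skips the small-file creation phase, and -u is required when running as root.)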
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi. I'm all set for doing a performance comparison between Solaris/ZFS and FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I think I'm ready. The machine is a 1x quad-core DELL PowerEdge 1950, 2GB RAM, 15 x 74GB FC 10K disks accessed via 2x2Gbit FC links. Unfortunately the links to the disks are the bottleneck, so I'm going to use no more than 4 disks, probably.
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
On Mon, Jul 24, 2023 at 9:04 PM Danilo Krummrich <dakr at redhat.com> wrote: > On 7/22/23 17:12, Faith Ekstrand wrote: > > On Wed, Jul 19, 2023 at 7:15 PM Danilo Krummrich <dakr at redhat.com > > <mailto:dakr at redhat.com>> wrote: > > > > This commit provides the implementation for the new uapi motivated > > by the > > Vulkan
2006 Aug 08
1
Client/server test harness - Crucible 1.6
Hi all, At OLS last month I gave a talk about doing automated client/server testing of NFSv4. During and after that talk there was some discussion with Steve French about using the same framework for testing Samba, so I thought it might be worthwhile to post about the framework on this list. We've also just put out a new 1.6 release of Crucible; I've attached the release notice below. The
2010 Jul 20
16
zfs raidz1 and traditional raid 5 perfomrance comparision
Hi, for ZFS raidz1 I know that, for random I/O, the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like RAID 5, does RAID 5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS? Regards Victor -- This message posted from opensolaris.org
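As a worked example of the claim (numbers are illustrative, not from the post): take six disks that each do about 100 random IOPS. A single 6-disk raidz1 vdev still delivers on the order of 100 random read IOPS, because each block is spread across the whole vdev and every disk must seek for each logical read. A traditional RAID 5 set over the same disks can serve independent small reads from different members concurrently, so its random read rate can approach 6 x 100 = 600 IOPS, while its small random writes are cut down by the read-modify-write penalty (roughly four physical I/Os per logical write). So no, the two do not behave the same for random I/O.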
2006 Sep 07
5
Performance problem of ZFS ( Sol 10U2 )
Hi, I deployed ZFS on our mailserver recently, hoping for eternal peace after running on UFS and moving files with each TB added. It is a mailserver - its mdirs are on a ZFS pool:
                             capacity     operations    bandwidth
pool                       used  avail   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
On 7/22/23 17:12, Faith Ekstrand wrote: > On Wed, Jul 19, 2023 at 7:15 PM Danilo Krummrich <dakr at redhat.com > <mailto:dakr at redhat.com>> wrote: > > This commit provides the implementation for the new uapi motivated > by the > Vulkan API. It allows user mode drivers (UMDs) to: > > 1) Initialize a GPU virtual address (VA) space via the new
2009 Jan 23
1
ZIL FOID
I need some clarification on the FOID handed to zil_commit. I wrote a D script to watch entry and return of zil_commit_writer. Here is an example output:
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211310 : FOID 129644 Completed in 0 ms
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211324 : FOID 129644 Completed in 0 ms
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211386
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list, someone (actually Neil Perrin (CC)) mentioned in this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html that it should be possible to import a pool with failed log devices (with or without data loss?).
> Has the following error no consequences?
> Bug ID 6538021
> Synopsis: Need a way to force pool startup when
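For reference, later ZFS releases grew an import option for exactly this situation; a hedged sketch (the pool name is a placeholder, and whether the flag exists depends on the ZFS version in use):

    # import a pool whose separate log device is missing or failed,
    # discarding any ZIL records that had not reached the main pool
    zpool import -m tank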
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, as I have learned from the discussion about which SSD to use as a ZIL drive, I stumbled across this article, which discusses short stroking for increasing IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now, I am wondering if using a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB). It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs... is it just data loss or is it pool loss? Also, does the fact that I have a UPS matter? The numbers I'm seeing are really nice... these are some NFS tar times before
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
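With four SSDs, one common split (device names below are placeholders, not the poster's hardware) is a mirrored slog plus striped L2ARC:

    # two SSDs as a mirrored dedicated ZIL (slog) for synchronous writes
    zpool add tank log mirror c5t0d0 c5t1d0

    # the other two as L2ARC cache devices (cache devices cannot be mirrored)
    zpool add tank cache c5t2d0 c5t3d0

Note that a slog only helps synchronous write latency; it does not raise raw streaming bandwidth.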
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all, I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of their exaggerated claims of sustained performance. I was thinking about getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it? --- What is the (average/min/max)
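One low-tech way to get those numbers is simply to watch the log device itself while the workload runs; a sketch, with the slog's device name as a placeholder:

    # per-device latency and throughput at 1-second intervals;
    # the slog's row shows how hard the ZIL is actually being driven
    iostat -xn 1 | egrep 'device|c6t0d0'

DTrace-based tools such as Richard Elling's zilstat script are another common way to observe ZIL traffic directly.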