Displaying 20 results from an estimated 3000 matches similar to: "ZFS and NFS"
2010 May 24
16
questions about zil
I recently got a new SSD (an OCZ Vertex LE 50GB).
It seems to work really well as a ZIL, performance-wise. My question is: how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss, or is it pool loss?
Also, does the fact that I have a UPS matter?
The numbers I'm seeing are really nice... these are some NFS tar times
before
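A sketch of the usual answer from this era, with illustrative pool and device names: on pool version 19 or later, losing an slog costs only the few seconds of synchronous writes that were in flight, not the pool; on older versions a dead log device could leave the pool unimportable.

    # check the pool version (19+ allows log device removal)
    zpool get version tank
    # attach the SSD as a dedicated intent log
    zpool add tank log c3t0d0

A UPS helps against site power loss but not against the SSD itself dropping its volatile write cache, which is what the missing supercap is about.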
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8port ARC-1220 controller
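For reference, attaching the two roles is one command each; the pool and device names below are illustrative, not from the post.

    zpool add tank log c4t0d0      # dedicated intent log (slog)
    zpool add tank cache c4t1d0    # L2ARC
    zpool status tank              # verify the 'logs' and 'cache' sections

The slog wants low write latency and power-loss protection; the L2ARC only needs read throughput and tolerates cheap MLC.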
2009 Dec 02
10
Separate ZIL on HDD?
Hi all,
I have a home server based on SNV_127 with 8 disks;
2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
This server performs a few functions;
NFS: for several 'lab' ESX virtual machines
NFS: MythTV storage (videos, music, recordings, etc.)
Samba: for home directories for all networked PCs
I backup the important data to external USB hdd each day.
I previously had
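For context, a separate log on spare HDDs would be attached like this (slice names made up for illustration):

    zpool add datapool log mirror c1t6d0s0 c1t7d0s0

Whether it helps is another matter: an slog only pays off if its sync-write latency beats the pool disks', which an HDD of the same class as the raidz2 members is unlikely to do.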
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduce its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to its spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC-based, write cycles are an issue here,
though I can't find any number in the spec.
Why do I
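The usual sharing scheme is two slices on the one SSD, created with format(1M) first; slice names below are illustrative.

    zpool add tank log c5t0d0s0      # small slice as slog
    zpool add tank cache c5t0d0s1    # remainder as L2ARC

On wear: L2ARC fill is throttled by design (the l2arc_write_max tunable), so most of the write cycles on a shared device come from the slog side.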
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
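Bursts like this usually line up with transaction-group commits: async writes buffer in RAM and flush every few seconds (up to roughly 30 seconds, depending on build), so zeros between bursts are normal. Watching at one-second granularity makes the cadence obvious:

    zpool iostat -v xpool 1

If the X1 log devices show steady traffic while the data disks burst, the sync path is fine and the pattern is just the txg rhythm.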
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
2010 Apr 27
42
Performance drop during scrub?
Hi all
I have a test system with snv134 and 8x 2TB drives in RAIDZ2, and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the test pool drops to something hardly usable while scrubbing the pool.
How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down the scrub's priority somehow?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at
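On builds of this vintage the scrub rate can be throttled with kernel tunables, though the tunable names changed across builds, so treat the one below as an assumption to verify on snv134:

    # larger delay = slower, politer scrub
    echo "zfs_scrub_delay/W0t4" | mdb -kw

An slog would help the NFS sync writes specifically; neither it nor L2ARC stops a scrub from competing for the same spindles.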
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database backends.
So now we are looking for a new database server to give us a big performance boost, and also the possibility for scalability.
Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
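One commonly cited first step for database pools, hedged because it depends on the engine: match recordsize to the database page size before loading any data, since the property only affects newly written files. Names are illustrative.

    zfs create -o recordsize=8k -o atime=off tank/db    # 8K pages, e.g. PostgreSQL; use 16k for InnoDB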
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a
RAIDZ2 exported over fibre channel) but there's no such thing as too much
speed, and these other two drive bays are just begging for drives in
them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase
speed, or will the extra parity writes reduce speed, or will the two factors
offset and leave things
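Back-of-envelope arithmetic: the 8-disk RAIDZ2 has 6 data spindles and the 10-disk RAIDZ3 has 7, so streaming bandwidth should rise by roughly 7/6 (about 17%), while random IOPS stay pinned near one disk's worth per vdev either way; the extra parity is computed per write and mostly costs CPU, not spindle time.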
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 using UFS, I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
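The rough ZFS equivalent of the ufsdump/ufsrestore cycle is a recursive snapshot plus send/receive; pool names below are illustrative.

    zfs snapshot -r rpool@backup
    zfs send -R rpool@backup | zfs receive -Fdu backup/rpool

The part ufsrestore never had to care about: a restored root pool also needs its boot blocks reinstalled (installgrub on x86, installboot on SPARC) and the bootfs property set.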
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All,
It's been a while since I touched ZFS. Is the below still the case with ZFS and a hardware RAID array? Do we still need to provide two LUNs from the hardware RAID and then have ZFS mirror those two LUNs?
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
Thanks,
Shawn
--
This message posted from opensolaris.org
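The FAQ's underlying point still holds in outline: ZFS can repair corruption only where it holds a redundant copy, so on top of hardware RAID you either present two LUNs and mirror them (illustrative names below) or accept detect-only checksumming.

    zpool create tank mirror c6t0d0 c7t0d0

With a single LUN, 'zfs set copies=2' gives selected datasets block-level self-healing for data, though it obviously cannot survive losing the LUN itself.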
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
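One concrete reason to keep the databases on a separate filesystem: properties are per-filesystem, so the database one can be tuned without touching the mailboxes (names illustrative).

    zfs set atime=off space/imap/db
    zfs set recordsize=8k space/imap/db

Beyond properties, the number of filesystems in one pool has little performance effect by itself; they all share the same ARC and the same vdevs.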
2009 Apr 23
1
Unexpectedly poor 10-disk RAID-Z2 performance?
Hail, caesar.
I've got a 10-disk RAID-Z2 backed by the 1.5 TB Seagate drives
everyone's so fond of. They've all received a firmware upgrade (the
sane one, not the one that caused your drives to brick if the internal
event log hit the wrong number on boot).
They're attached to an ARC-1280ML, a reasonably good SATA controller,
which has 1 GB of ECC DDR2 for
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
I believe it was sometimes implied on this list that such
fragmentation for "static" data can be currently combatted
only by zfs send-ing existing
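The send-based rewrite the post alludes to looks like this in outline (names illustrative); a resilver, by contrast, reproduces blocks at their existing locations on the replacement disk and is not expected to defragment anything.

    zfs snapshot tank/data@rewrite
    zfs send tank/data@rewrite | zfs receive -u tank/data_rewritten

The receive lays blocks back down as one continuous write stream, which is what undoes the fragmentation.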
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking at getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
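Before buying, it is worth confirming the writes are actually sync-latency-bound rather than bandwidth-bound; standard tools suffice and assume nothing about the layout.

    iostat -xnz 5       # per-device service times and %b
    zpool iostat -v 5   # per-vdev read/write split

An slog only accelerates synchronous writes; COMSTAR FC traffic is synchronous when writeback caching is disabled on the LU, which is the case where X25-Es as log devices would earn their keep.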
2010 Jul 20
16
zfs raidz1 and traditional RAID 5 performance comparison
Hi,
For ZFS raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal one physical disk's IOPS. Since raidz1 is like RAID 5, does RAID 5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
--
This message posted from opensolaris.org
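Short version of the usual answer: no. In raidz1 every block spans the whole stripe, so a small random read touches all disks and the vdev delivers roughly one disk's IOPS; in RAID 5 a small read touches a single disk, so an N-disk group can serve up to about N independent random reads in parallel. For small random writes the comparison flips: RAID 5 pays the read-modify-write penalty (four I/Os per write) while raidz1 avoids it by always writing full, variable-width stripes.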
2009 Jun 30
21
ZFS, power failures, and UPSes
Hello,
I've looked around Google and the zfs-discuss archives but have not been
able to find a good answer to this question (and the related questions
that follow it):
How well does ZFS handle unexpected power failures? (e.g. environmental
power failures, power supply dying, etc.)
Does it consistently gracefully recover?
Should having a UPS be considered a (strong) recommendation or
2010 Jun 25
13
OCZ Vertex 2 Pro performance numbers
Now the test for the Vertex 2 Pro. This was fun.
For more explanation please see the thread "Crucial RealSSD C300 and cache
flush?"
This time I made sure the device is attached via 3GBit SATA. This is also
only a short test. I'll retest after some weeks of usage.
cache enabled, 32 buffers, 64k blocks
linear write, random data: 96 MB/s
linear read, random data: 206 MB/s
linear
2008 Jul 31
9
Terrible zfs performance under NFS load
Hello,
We have an S10U5 server sharing NFS shares from ZFS. While using an NFS mount as the log destination for syslog from 20 or so busy mail servers, we have noticed that throughput becomes severely degraded after a short while. I have tried disabling the ZIL and turning off cache flushing, and have not seen any change in performance. The servers are only pushing about 1MB/s of constant
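For the record, on S10U5 disabling the ZIL meant the /etc/system tunable below plus a reboot (a live-toggle 'zfs set sync=' did not exist yet), so a disable attempt that skipped the reboot would have changed nothing:

    set zfs:zil_disable = 1

If that was genuinely in effect and throughput still collapsed, the bottleneck is more likely the async write path or the disks themselves, which zil_disable does not touch.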
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello,
We have a new Thor here with 24TB of disk in it (the first of many, hopefully).
We are trying to determine the best practices with respect to file system
management and sizing. Previously, we have tried to keep each file system
to a max size of 500GB to make sure we could fit it all on a single tape,
and to minimise restore times and impact should we experience some kind of
volume
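If the 500GB-per-tape cap survives the move to ZFS, it maps naturally onto one filesystem per tape-sized unit with a quota, rather than fixed-size volumes; names are illustrative.

    zfs create -o quota=500g thor/data/proj01
    zfs set reservation=100g thor/data/proj01    # optional guaranteed floor

Filesystems are cheap in ZFS, so per-dataset quotas remove most of the old reasons to carve a pool up in advance.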