Displaying 20 results from an estimated 20000 matches similar to: "ZFS Usage in Warehousing (lengthy intro)"
2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there
Is it fair to compare the two solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)?
The cache on the HW RAID helps, and the CPU load is lower... but the solution costs more and you _might_ not need the performance of the HW RAID.
Has anybody with access to these units done a benchmark comparing the performance (price list in hand) and come to a conclusion?
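For concreteness, the two setups being compared might look like this from the ZFS side; a minimal sketch, assuming placeholder c#t#d# device names and a pool name (dbpool) that are not from the original post:

  # 3510 as JBOD: ZFS provides the redundancy itself (here double parity)
  zpool create dbpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

  # 3510 HW RAID: the controller exports one protected LUN; ZFS just uses it
  zpool create dbpool c3t0d0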
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best ZFS layout for a Thumper with respect to both read AND write performance.
I did some simple mkfile 512G tests and found that, on average, ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as RAID-0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
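A minimal sketch of the kind of sequential-write test described above, with tank as a placeholder pool name:

  mkfile 512g /tank/testfile &   # large sequential write, as in the post
  zpool iostat tank 5            # watch aggregate write bandwidth in 5s intervals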
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8-port ARC-1220 controller
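Hedged sketch of how the SSDs would be attached once chosen; tank and the device names are placeholders, and the syntax assumes a zpool version with log and cache vdev support:

  zpool add tank log c4t0d0     # SSD as a separate intent log (slog) for the ZIL
  zpool add tank cache c4t1d0   # SSD as an L2ARC read cache
  zpool status tank             # confirm the log and cache vdevs appear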
2009 Jan 06
11
zfs list improvements?
To improve the performance of scripts that manipulate zfs snapshots, and of the zfs snapshot service in particular, there needs to be a way to list all the snapshots for a given object, and only the snapshots for that object.
There are two RFEs filed that cover this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6352014 :
'zfs list' should have an option to only present direct
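To illustrate the gap the RFEs address: without such an option, scripts have to pull every snapshot and filter client-side; later zfs versions grew a depth option that covers this natively. A sketch with tank/home as a placeholder dataset:

  # workaround: list all snapshots, filter client-side
  zfs list -H -t snapshot -o name | grep '^tank/home@'

  # on builds that have the depth option: only tank/home's own snapshots
  zfs list -t snapshot -d 1 tank/home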
2006 Jul 17
11
ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
Hi All,
I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promise of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
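The post is truncated here, but a first-pass sanity check on such a box usually looks something like the following sketch (tank/bench and the sizes are placeholders, not the poster's actual tests):

  zfs create tank/bench
  mkfile 8g /tank/bench/seqwrite                      # sequential write
  dd if=/tank/bench/seqwrite of=/dev/null bs=1024k    # sequential read-back
  # beware ARC caching on the read-back; use a file larger than RAM
  zpool iostat -v tank 5                              # per-vdev view while it runs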
2010 Mar 24
21
ZFS on a 11TB HW RAID-5 controller
Hello all,
I am a complete newbie to OpenSolaris and must set up a ZFS NAS. I do have Linux experience, but have never used ZFS. I have tried to install OpenSolaris Developer 134 on an 11TB HW RAID-5 virtual disk, but after the installation I can only use one 2TB disk, and I cannot partition the rest. I realize that the maximum partition size is 2TB, but I guess the rest must be usable. For
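One likely way out, sketched with a placeholder device name: keep the install on a small boot disk or slice, and hand the big LUN to ZFS whole. zpool puts an EFI label on whole disks, and EFI labels are not subject to the 2TB SMI-label limit that applies to the boot disk:

  zpool create datapool c0t1d0   # whole 11TB LUN gets an EFI label, so >2TB works
  zpool list datapool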
2007 Sep 04
23
I/O freeze after a disk failure
Hi all,
yesterday we had a drive failure on an FC-AL JBOD with 14 drives.
Suddenly the zpool using that JBOD stopped responding to I/O requests, and we got tons of the following messages in /var/adm/messages:
Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g20000004cfd81b9f (sd52):
Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
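Typical first-look commands in this situation; a sketch of general triage, not the resolution from the thread:

  zpool status -x    # which pool/vdev is unhealthy
  fmdump -eV | tail  # recent FMA error telemetry for the failing drive
  cfgadm -al         # state of the FC-AL attachment points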
2010 Mar 18
2
lazy zfs destroy
OK I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I don't care that this zfs destroy finishes quickly. I actually don't care, as long as it finishes before I run out of disk space.
So a
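A hedged note: ZFS releases later than this post destroy large datasets asynchronously (the async_destroy pool feature), so the command returns quickly and space is reclaimed in the background. A sketch against such a pool, with placeholder names:

  zfs destroy tank/data@bigsnap
  zpool get freeing tank   # bytes still being reclaimed in the background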
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list,
as this matter pops up every now and then in posts on this list, I just
want to clarify that the real performance of RaidZ (in its current
implementation) does NOT follow from raidz-style data-efficient
redundancy or from the copy-on-write design used in ZFS.
In an M-way mirrored setup of N disks you get the write performance of
the worst disk and a read performance that is
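A worked example of the comparison being set up here, with illustrative numbers only (100 IOPS per disk is an assumption, not from the post):

  # 12 disks, ~100 random-read IOPS each:
  #   6 x 2-way mirror: reads ~ 12 x 100 = 1200 IOPS, writes ~ 6 x 100 = 600 IOPS
  #   2 x 6-disk raidz2: each raidz vdev delivers roughly one disk's worth of
  #   random IOPS (every disk participates in each stripe), so ~ 2 x 100 = 200 IOPS
  # Sequential bandwidth, by contrast, scales with the data disks in both layouts.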
2009 Jul 27
10
sam-fs on zfs-pool
Hi list,
I've done some tests and run into a very strange situation...
I created a zvol using "zfs create -V" and initialized a SAM filesystem
on this zvol.
After that I restored some test data using a dump from another system.
So far so good.
After some big trouble I found out that releasing files in the
SAM filesystem doesn't free space on the underlying zvol.
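A sketch of the setup described, with placeholder names; the last comment is the crux of the problem:

  zfs create -V 100g tank/samvol           # zvol backing the SAM-FS file system
  zfs get volsize,referenced tank/samvol   # 'referenced' = blocks the zvol holds
  # Once SAM-FS has written a block, the zvol keeps it allocated; a release at
  # the SAM-FS layer is not propagated down to ZFS as a free.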
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another, larger-capacity media server. Also switching over to Solaris/ZFS.
Anyhow, we have a 24-drive capacity. These are for large sequential access (large media files) used by no more than 3 to 5 users at a time. I'm inquiring as to what the best vdev configuration for this is. I'm considering the following configurations
4 x 6
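The first candidate layout (4 vdevs of 6 disks; raidz2 is an assumption here, since the list is cut off) would be built like this, with placeholder device names:

  zpool create media \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0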
2007 Aug 27
17
statvfs change
An issue was found with the NetBeans installer where
the installation was failing on a large ZFS filesystem.
This resulted in CR 6560644 (zfs statvfs f_frsize needs work).
The issue is that large filesystems can cause EOVERFLOW on
statvfs() calls. This behavior is documented in the statvfs(2)
man page, but I think we can do better.
The problem was initially reported against ZFS, and my first
fix
2006 Feb 14
2
Customer questions
A few questions:
1. How does ZFS compare to Lustre?
2. Is ZFS compatible with Veritas Netbackup Suite?
3. Does ZFS support bare metal restore (BMR)?
Pointers welcome!!
Gary
2006 May 31
12
3510 configuration for ZFS
Hi all,
I am hoping to move roughly 1TB of maildir format email to ZFS, but
I am unsure of what the most appropriate disk configuration on a 3510
would be.
Based on the desired level of redundancy and usable space, my thought
was to create a pool consisting of 2x RAID-Z vdevs (either double
parity, or single parity with two hot spares). Using 300GB drives
this would give roughly 2.4TB of usable
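The two layouts the poster weighs, sketched with placeholder device names for twelve 300GB drives (raidz2 assumes a release that supports double parity):

  # (a) two double-parity vdevs:
  zpool create mail \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

  # (b) two single-parity vdevs plus two hot spares:
  zpool create mail \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    raidz c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
    spare c2t10d0 c2t11d0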
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
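Working the arithmetic from the post through with concrete, illustrative numbers:

  # raidz2 vdev of 7 disks: M = 5 data + N = 2 parity
  # 128K record -> 128K / 5 = 25.6K of data per disk, plus parity on 2 disks
  zfs get recordsize tank          # the 128K default discussed above
  zfs set recordsize=64K tank/fs   # per-dataset tuning, if smaller records help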
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello,
I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
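The ZFS side of that comparison is a one-liner; a sketch with placeholder device names, assuming the Perc exports each disk as its own single-disk volume:

  zpool create fast c1t0d0 c1t1d0   # dynamic stripe across both disks, RAID0-style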
2006 Nov 03
27
# devices in raidz.
For s10u2, the documentation recommends 3 to 9 devices in raidz. What is the
basis for this recommendation? I assume it is performance and not failure
resilience, but I am just guessing... [I know, the recommendation was intended
for people who know their RAID cold, so it needed no further explanation]
thanks... oz
--
ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540
I have a hard time
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16GB of RAM, OpenSolaris upgraded to snv_134.
The zpool
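The post is cut off here; for a report like this, the numbers usually worth collecting first are sketched below (tank is a placeholder pool name):

  zpool status -v tank     # layout, error counters, resilver/scrub state
  zpool iostat -v tank 5   # per-vdev bandwidth while the slowdown is happening
  iostat -xn 5             # per-device service times under the arcmsr driver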