similar to: zfs send speed

Displaying 20 results from an estimated 2000 matches similar to: "zfs send speed"

2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy
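Until such heuristics exist, recordsize is tuned by hand per filesystem. A minimal sketch, assuming a hypothetical dataset tank/db holding an 8K-block database:

    # check the current recordsize (128K is the default)
    zfs get recordsize tank/db
    # match recordsize to the application's I/O size; affects new writes only
    zfs set recordsize=8k tank/db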
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks, A colleague and I are currently involved in a prototyping exercise to evaluate ZFS against our current filesystem. We are looking at the best way to arrange the disks in a 3510 storage array. We have been testing with the 12 disks on the 3510 exported as "nraid" logical devices. We then configured a single ZFS pool on top of this, using two raid-z arrays. We are getting
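For reference, a pool of two raid-z arrays over 12 exported LUNs might be created as below; this is only a sketch, and the device names are hypothetical:

    # two 6-disk raid-z vdevs in one pool; ZFS stripes across them
    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0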
2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there Is it fair to compare the two solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)? The cache on the HW RAID helps, and the CPU load is lower... but the solution costs more and you _might_ not need the performance of the HW RAID. Has anybody with access to these units run a benchmark comparing the performance and, with the pricelist in hand, come to a conclusion?
2007 May 02
16
ZFS Support for remote mirroring
Does ZFS support any type of remote mirroring? It seems at present my only two options to achieve this would be Sun Cluster or Availability Suite. I thought that this functionality was in the works, but I haven't heard anything lately. Thanks! Aaron Newcomb http://opennewsshow.org http://thesourceshow.org This message posted from opensolaris.org
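Short of those products, periodic (asynchronous) replication can be scripted with send/receive over ssh. A minimal sketch, with hypothetical host and dataset names:

    # snapshot, then copy the snapshot to a remote pool
    zfs snapshot tank/data@mirror1
    zfs send tank/data@mirror1 | ssh backuphost zfs recv -F backup/data
    # later, send only the blocks changed since the last snapshot
    zfs snapshot tank/data@mirror2
    zfs send -i tank/data@mirror1 tank/data@mirror2 | ssh backuphost zfs recv backup/data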
2006 Sep 11
95
Proposal: multiple copies of user data
Here is a proposal for a new 'copies' property which would allow different levels of replication for different filesystems. Your comments are appreciated! --matt A. INTRODUCTION ZFS stores multiple copies of all metadata. This is accomplished by storing up to three DVAs (Disk Virtual Addresses) in each block pointer. This feature is known as "Ditto Blocks". When
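As eventually delivered, the property is set per filesystem; a minimal sketch with a hypothetical dataset name:

    # keep two copies of user data, on top of any pool-level redundancy
    zfs set copies=2 tank/important
    # note: applies only to data written after the property is set
    zfs get copies tank/important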
2006 Nov 01
56
ZFS/iSCSI target integration
Rick McNeal and I have been working on building support for sharing ZVOLs as iSCSI targets directly into ZFS. Below is the proposal I'll be submitting to PSARC. Comments and suggestions are welcome. Adam ---8<--- iSCSI/ZFS Integration A. Overview The goal of this project is to couple ZFS with the iSCSI target in Solaris specifically to make it as easy to create and export ZVOLs
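The integration was later exposed as a share-style property on ZVOLs; a minimal sketch, with a hypothetical volume name:

    # create a 10GB ZVOL and export it as an iSCSI target
    zfs create -V 10g tank/vol1
    zfs set shareiscsi=on tank/vol1
    # confirm the target is advertised
    iscsitadm list target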
2006 Nov 03
27
# devices in raidz.
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the basis for this recommendation? i assume it is performance and not failure resilience, but i am just guessing... [i know, recommendation was intended for people who know their raid cold, so it needed no further explanation] thanks... oz -- ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540 I have a hard time
2007 Jun 15
3
zfs and EMC
Hi there, I see strange behavior when I create a zfs pool on an EMC PowerPath pseudo device. I can create a pool on emcpower0a but not on emcpower2a; zpool core dumps with "invalid argument" .... ???? That's my second machine with PowerPath and ZFS; the first one works fine, even zfs/powerpath and failover ... Is there anybody who has seen the same failure and found a solution? :) Greets Dominik
2007 Sep 25
2
ZFS speed degraded in S10U4 ?
Hi Guys, I'm playing with a Blade 6300 to check the performance of compressed ZFS with an Oracle database. After some really simple tests I noticed that a default (well, not really default, some patches applied, but definitely no one bothered to tweak the disk subsystem or anything else) installation of S10U3 is actually faster than S10U4, and a lot faster. Actually it's even faster on
2008 Mar 13
4
Disabling zfs xattr in S10u4
Hi, I want to disable extended attributes in my zfs on s10u4. I found out that the command to do this is zfs set xattr=off <poolname>. But I do not see this option in s10u4. How can I disable zfs extended attributes on s10u4? I'm not on the zfs-discuss alias. Please respond to me directly. Thanks Balaji
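On bits where the property is supported, the usage the poster found works as follows; the dataset name here is hypothetical:

    # turn off extended attribute support for a filesystem
    zfs set xattr=off tank/home
    # verify the setting
    zfs get xattr tank/home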
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, and trying to see what kind of benefits enabling dedup will give me. The standard practice for reprocessing data that's already stored to add compression and now dedup seems to be a send / receive pipe similar to: zfs send -R <old fs>@snap | zfs recv -d <new fs> However, according to the man page,
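A minimal, non-recursive variant of that rewrite idiom, with hypothetical dataset names (assumes snv_128's dedup property):

    # enable the new settings on the destination, then rewrite the
    # data through a send/receive pipe so every block is written again
    zfs set dedup=on tank/new
    zfs set compression=on tank/new
    zfs send tank/old@snap | zfs recv tank/new/old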
2008 Mar 25
11
Failure to instal S10U4 HVM at SNV85 Dom0
System config:-
bash-3.2# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
rge0: flags=201004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4,CoS> mtu 1500 index 2
        inet 192.168.1.53 netmask ffffff00 broadcast 192.168.1.255
        ether 0:1e:8c:25:cc:a5
lo0:
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers paniced today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor grants for ZFS. Since all of the ZFS core contributors grants are set to expire on 02-24-2009 we need to renew the members that are still contributing at core contributor levels. We should also add some new members to both Contributor and Core contributor levels. First the current list of Core contributors: Bill
2006 Mar 03
5
flag day: ZFS on-disk format change
Summary: If you use ZFS, do not downgrade from build 35 or later to build 34 or earlier. This putback (into Solaris Nevada build 35) introduced a backwards-compatible change to the ZFS on-disk format. Old pools will be seamlessly accessed by the new code; you do not need to do anything special. However, do *not* downgrade from build 35 or later to build 34 or earlier. If you do so, some of
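On builds where the on-disk format is versioned, the version can be checked before moving pools between builds; a sketch, assuming zpool upgrade is available:

    # show the on-disk format version of each imported pool
    zpool upgrade
    # list every format version this build understands
    zpool upgrade -v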
2007 Oct 24
1
S10u4 in kernel sharetab
There was a lot of talk about ZFS and NFS shares being a problem when there was a large number of filesystems. There was a fix that in part included an in-kernel sharetab (I think :) Does anyone know if this has made it into S10u4? Thanks, BlueUmp This message posted from opensolaris.org
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest recommended version and am seeing radically different performance when testing with iozone than I did in February of 2008. I am using Solaris 10 U5 with all the latest patches. This is the performance achieved (on a 32GB file) in February last year:
        KB  reclen   write  rewrite    read   reread
  33554432
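An iozone run of that shape looks roughly like this; the file path is hypothetical and the flags assume stock iozone:

    # sequential write/rewrite (-i 0) and read/reread (-i 1) on a 32GB file
    iozone -i 0 -i 1 -s 32g -r 128k -f /pool/testfile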
2007 Jul 13
28
ZFS and powerpath
How much fun can you have with a simple thing like powerpath? Here's the story: I have a (remote) system with access to a couple of EMC LUNs. Originally, I set it up with mpxio and created a simple zpool containing the two LUNs. It's now been reconfigured to use powerpath instead of mpxio. My problem is that I can't import the pool. I get: pool: ###### id:
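When device paths change underneath a pool (mpxio to PowerPath pseudo devices, for instance), pointing the import scan at a specific device directory sometimes helps; a sketch with a hypothetical pool name:

    # search a device directory for pool labels, then import by name
    zpool import -d /dev/dsk
    zpool import -d /dev/dsk tank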
1999 Jan 20
2
Installation of packages?
Dear r-helpers, we have installation problems: Successful installation of the R-0.63 base package on Solaris 2.5.1 with the SunSoft compilers f77, c version 4.2. We have problems with the installation of further packages, e.g. integrate from CRAN. R code works, but the shared objects built from fortran code do not find the appropriate libs with functions like __pow_ii or __epx at runtime. We tried