Similar to: zfs send/recv reliability

Displaying 20 results from an estimated 7000 matches similar to: "zfs send/recv reliability"

2008 Nov 06 (45 messages): 'zfs recv' is very slow
Hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec. But when I use 'zfs send
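
Redirecting the stream to a file, as the poster does, is the standard way to tell the send side from the recv side. A minimal sketch of timing each half separately, with hypothetical dataset and snapshot names (the receiving dataset must already hold snap1 for an incremental to apply):

# zfs send -i tank/fs@snap1 tank/fs@snap2 > /tmp/stream.zfs    (send side only, no receive)
# time zfs receive tank/fs-copy < /tmp/stream.zfs              (receive side only, from local disk)

If writing the stream to a file is fast but receiving that same file is slow, the bottleneck is in zfs recv rather than in the network or the send.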

2009 Dec 04 (30 messages): ZFS send | verify | receive
If there were a 'zfs send' datastream saved someplace, is there a way to verify the integrity of that datastream without doing a 'zfs receive' and occupying all that disk space? I am aware that 'zfs send' is not a backup solution, due to vulnerability to even a single bit error, lack of granularity, and other reasons. However ... There is an attraction to 'zfs send' as an augmentation to the
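
One workaround, if the stream is still being produced, is to record a checksum as the stream is written and compare it again before attempting a receive. A minimal sketch with hypothetical names, assuming sha256sum is available (on Solaris, digest -a sha256 is the equivalent):

# zfs send tank/fs@snap | tee /backup/fs.zfs | sha256sum       (note the printed hash)
# sha256sum /backup/fs.zfs                                     (later: must print the same hash)

This only detects corruption of the saved file; it cannot validate a stream whose checksum was never captured at creation time.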

2009 Feb 17 (32 messages): Backing up ZFS snapshots
I have an OpenSolaris snv_105 server at home that holds my photos, docs, music, etc., in a zfs pool. I back up my laptops with rsync to the OpenSolaris server, so all of my important data is in one place, on the OpenSolaris server. I want to back up this data. I want to protect against losing my data, and I would also like to recover previous versions of files when I make mistakes. * I do not have a
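
For the previous-versions requirement, the usual pattern on this list is periodic snapshots plus incremental sends to a second pool. A sketch, assuming a hypothetical source dataset tank/data and an attached backup pool bpool:

# zfs send tank/data@2009-02-10 | zfs receive bpool/data                         (initial full copy)
# zfs send -i tank/data@2009-02-10 tank/data@2009-02-17 | zfs receive bpool/data (incremental)

Each incremental transfers only the blocks changed since the previous snapshot, and older file versions remain reachable under /bpool/data/.zfs/snapshot/.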

2008 Feb 21 (37 messages): Preferred backup s/w
Hi all, What is the current preferred method for backing up ZFS data pools, preferably using free ($0.00) software, and assuming that access to individual files (a la ufsdump/ufsrestore) is required? TIA, -- Rich Teer, SCSA, SCNA, SCSECA, OGB member CEO, My Online Home Inventory URLs: http://www.rite-group.com/rich http://www.linkedin.com/in/richteer
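
Because every snapshot is visible as a read-only tree under .zfs/snapshot, any ordinary file-level archiver gives ufsrestore-style access to individual files. A sketch with tar, assuming a hypothetical dataset tank/data with a snapshot named backup:

# zfs snapshot tank/data@backup
# tar cf /rmt/tank-data-backup.tar -C /tank/data/.zfs/snapshot/backup .

Restoring one file is then just extracting one path from the archive; how well ZFS/NFSv4 ACLs survive depends on the archiver, which is why star is often suggested on this list instead.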

2008 Dec 08 (5 messages): How to use mbuffer with zfs send/recv
>> How do I compile mbuffer for our system? Thanks to Mike Futerko for help with the compile, I now have it installed OK. >> And what syntax do I use to invoke it within the zfs send/recv? Still looking for answers to this one. Any example syntax, gotchas etc. would be much appreciated. -- Kind regards, Jules free. open. honest. love. kindness. generosity. energy. frenetic.
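
The recipe that usually circulates on this list runs mbuffer on both ends of a raw TCP connection, so neither zfs send nor zfs recv stalls waiting for the other. A sketch, assuming hypothetical host and dataset names and mbuffer's -I (listen) / -O (connect) options:

receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/fs
sender#   zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090

Start the receiver first; -s (block size) should match on both ends, and -m sets how much RAM each side may use for buffering.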

2010 Jun 25 (11 messages): Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority, or throttling it too much?
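
Before blaming a throttle, it helps to measure each stage of the pipeline on its own; send speed is often limited by how fast the source disks can feed the stream rather than by any scheduler. A sketch with hypothetical names:

# time zfs send tank/fs@snap > /dev/null                      (raw send speed, no network)
# time zfs send tank/fs@snap | ssh host 'cat > /dev/null'     (send plus transport, no receive)

Dividing the snapshot's size by each elapsed time shows which hop drops from 150MB/s to 40-50MB/s.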

2010 Apr 27 (42 messages): Performance drop during scrub?
Hi all, I have a test system with snv134 and 8x2TB drives in RAIDz2, and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
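
Scrubs in builds of that era cannot be paused, but they can be stopped and re-run during quiet hours, which is the blunt workaround. A sketch; the mdb tunable shown is an assumption about snv_134's scan code (zfs_scrub_delay) and should be verified against the running kernel before use:

# zpool scrub -s testpool                  (stop the running scrub)
# echo zfs_scrub_delay/W0t4 | mdb -kw      (assumed tunable: throttle scrub I/O harder)

Note that a stopped scrub restarts from the beginning, so the quiet-hours approach only suits pools that scrub in a few hours.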

2010 May 20 (13 messages): send/recv over ssh
I know I'm probably doing something REALLY stupid..... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
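
The shape of the working pipeline is worth restating, since the remote half runs under whatever shell and PATH sshd provides, which is where pfexec surprises tend to come from. A sketch with hypothetical names, giving the remote command its full path:

# zfs send oldpool/media@move | ssh newserver pfexec /usr/sbin/zfs receive -F tank/media

If pfexec itself misbehaves, running the receive as a user holding the ZFS File System Management rights profile (or as plain root) takes it out of the picture.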

2009 Dec 16 (27 messages): zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance. I've had this read problem now for the past 2 months and just can't get to the bottom of it. I have a home snv_111b server, with a zfs raid pool (4 x Samsung 750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single core CPU and 4GB of RAM. I am using it
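
The first diagnostic this list usually asks for in a hang-on-read report is per-device latency while the problem is happening, since one dying SATA drive can stall reads for the whole raidz. A sketch:

# iostat -xn 5          (watch asvc_t and %b per disk during a hang)
# zpool status -v       (look for growing error counters on one device)

A single disk with service times far above its siblings is the classic signature.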

2009 Jan 07 (9 messages): 'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert (carsten.aulbert at aei.mpg.de) sent: > Brent Jones wrote: >> Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningful sized snapshots from say an X4540 takes up to 24 hours, for as little as 300GB

2008 Nov 26 (9 messages): ZPool and Filesystem Sizing - Best Practices?
Hello, We have a new Thor here with 24TB of disk in it (the first of many, hopefully). We are trying to determine the best practices with respect to file system management and sizing. Previously, we have tried to keep each file system to a max size of 500GB to make sure we could fit it all on a single tape, and to minimise restore times and impact should we experience some kind of volume
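
Since ZFS filesystems are cheap, a per-tape size policy like this is usually expressed with quotas rather than fixed-size volumes: one filesystem per project, each capped. A sketch with hypothetical names:

# zfs create tank/data/proj1
# zfs set quota=500G tank/data/proj1

The quota keeps any one filesystem within a single tape while the pool's free space stays shared among all of them.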

2010 Apr 29 (39 messages): Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which worked so well, I was very confident with it. Now ZFS doesn't have an exact replacement of this, so I need to find a best practice to replace it.
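
The nearest ZFS analogue is a recursive snapshot plus a replication stream written to the external disk. A sketch, assuming the external drive is mounted at a hypothetical /ext:

# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup > /ext/rpool.backup.zfs

Recovery is the part ufsrestore made easy and zfs receive does not: restoring rpool means booting from install media, recreating the pool, receiving the stream, and reinstalling the boot blocks, so the procedure deserves a rehearsal before it is needed.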

2008 Feb 15 (38 messages): Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware-managed RAID, ZFS-managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.

2009 Feb 04 (26 messages): ZFS snapshot splitting & joining
Hello everyone, I am trying to take ZFS snapshots (i.e. zfs send) and burn them to DVDs for offsite storage. In many cases, the snapshots greatly exceed the 8GB I can stuff onto a single DVD-DL. In order to make this work, I have used the "split" utility to break the images into smaller, fixed-size chunks that will fit onto a DVD. For example: #split -b8100m
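
Reassembly is the mirror image: concatenate the chunks in name order and feed them straight back to zfs receive, so the full stream never has to land on disk again. A sketch continuing the poster's split example, with hypothetical names:

# zfs send tank/fs@offsite | split -b 8100m - /dvd-staging/fs.zfs.
# cat /dvd-staging/fs.zfs.* | zfs receive tank/fs-restored

Because a single flipped bit invalidates the whole stream, checksumming each chunk (e.g. with sha256sum) before burning is cheap insurance.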

2010 Apr 23 (5 messages): Data movement across filesystems within a pool
I would have thought that file movement from one FS to another within the same pool would be almost instantaneous. Why does it go to the platter for such a move? # time cp /tmp/blockfile /pcshare/1gb-tempfile real 0m5.758s # time mv /pcshare/1gb-tempfile . real 0m4.501s Both FSs have compression=off. /tmp is RAM. -devsk -- This message posted from opensolaris.org
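
The reason is that each ZFS filesystem is a separate dataset, so rename(2) across the boundary fails with EXDEV and mv silently falls back to copy-then-unlink, exactly as it would between two pools. A quick way to see the difference, with hypothetical paths (subdir is a plain directory, otherfs a different dataset):

# time mv /pcshare/bigfile /pcshare/subdir/bigfile    (same dataset: a rename, instant)
# time mv /pcshare/bigfile /otherfs/bigfile           (different dataset: a full copy)

Only the first is metadata-only; the second moves every block through the ARC and back out to disk.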

2009 Mar 28 (53 messages): Can this be done?
I currently have a 7x1.5TB raidz1. I want to add "phase 2", which is another 7x1.5TB raidz1. Can I add the second phase to the first and basically have two RAID5s striped (in RAID terms)? Yes, I probably should upgrade the zpool format too. Currently running snv_104. Also should upgrade to 110. If that is possible, would anyone happen to have the simple command lines to
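
Adding a second raidz1 top-level vdev is exactly how ZFS stripes across RAID sets; the pool then balances new writes over both. A sketch with hypothetical device names (check the disk list carefully, since a mistaken zpool add is permanent):

# zpool add -n tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0   (dry run first)
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
# zpool upgrade tank                                                         (move to the current pool version)

Unlike zpool attach, zpool add cannot be undone in builds of that era, which is why the -n dry run is worth the habit.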

2009 Jun 10 (13 messages): Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS

2009 Apr 19 (21 messages): [on-discuss] Reliability at power failure?
Casper.Dik at Sun.COM wrote: > I would suggest that you follow my recipe: do not check the boot-archive during a reboot. And then report back. (I'm assuming that that will take several weeks.) We are back at square one; or, at the subject line. I did a zpool status -v, everything was hunky dory. Next, a power failure, 2 hours later, and this is what zpool status

2008 Sep 10 (7 messages): Intel M-series SSD
Interesting flash technology overview and SSD review here: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403 and another review here: http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html Regards, -- Al Hopper Logical Approach Inc, Plano, TX al at logical-approach.com Voice: 972.379.2133 Timezone: US CDT OpenSolaris Governing Board (OGB) Member - Apr 2005

2008 May 21 (11 messages): Per-user home filesystems and OS-X Leopard anomaly
I encountered an issue that people using OS-X systems as NFS clients need to be aware of. While not strictly a ZFS issue, it may be encountered most often by ZFS users, since ZFS makes it easy to support and export per-user filesystems. The problem I encountered was when using ZFS to create exported per-user filesystems and the OS-X automounter to perform the necessary mount magic. OS-X
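
The per-user pattern being described is one dataset per home directory, with NFS sharing inherited from the parent so each new user is exported automatically. A sketch with hypothetical names:

# zfs create tank/home
# zfs set sharenfs=on tank/home      (children inherit the share setting)
# zfs create tank/home/alice

Each user's filesystem then appears as its own NFS export, which is exactly what the client-side automounter, OS-X's included, has to cope with.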