similar to: Help needed with zfs send/receive

Displaying 20 results from an estimated 3000 matches similar to: "Help needed with zfs send/receive"

2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
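
A minimal sketch of the pattern being attempted, with hypothetical pool and host names (tank/media, newserver); the remote zfs receive needs elevated privileges, hence pfexec on the far end:

    # snapshot the filesystem, then stream it over ssh to the new box
    zfs snapshot tank/media@move1
    zfs send tank/media@move1 | ssh admin@newserver pfexec /usr/sbin/zfs receive tank/media
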
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour; both are CPU-limited long before the 10G link is. I've also tried mbuffer, but I get broken-pipe errors part way through the transfer. I'm open to ideas for faster ways to do either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp (arcfour) the gz file to the
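
A hedged reading of the staged approach the poster describes, with hypothetical dataset and host names; pigz is a parallel gzip, and arcfour was the cheapest cipher older OpenSSH builds offered:

    # compress the stream to a file, copy it with a cheap cipher, unpack remotely
    zfs send tank/fs@snap1 | pigz > /var/tmp/snap1.zfs.gz
    scp -c arcfour /var/tmp/snap1.zfs.gz backuphost:/var/tmp/
    # then on backuphost:
    gunzip -c /var/tmp/snap1.zfs.gz | zfs receive tank/fs
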
2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert <carsten.aulbert at aei.mpg.de> sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningful sized snapshots from, say, an X4540 takes up to 24 hours, for as little as 300GB
2008 Nov 06
45
'zfs recv' is very slow
Hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec, but when I use 'zfs send
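
For reference, the usual shape of such an incremental replication, with hypothetical dataset and snapshot names:

    # one-time full copy of the baseline snapshot
    zfs send tank/data@base | ssh B /usr/sbin/zfs receive tank/data
    # thereafter, send only the delta between the common snapshot and the new one
    zfs snapshot tank/data@today
    zfs send -i tank/data@base tank/data@today | ssh B /usr/sbin/zfs receive tank/data
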
2009 Mar 09
3
cannot mount '/export': directory is not empty
Hello, I am desperate. Today I realized that my OS 108 doesn't want to boot. I have no idea what I screwed up. I upgraded to 108 last week without any problems. Here is where I'm stuck: Reading ZFS config: done. Mounting ZFS filesystems: (1/17) cannot mount '/export': directory is not empty (17/17) $ svcs -x svc:/system/filesystem/local:default (local file
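
The usual cause is stray files written under the mountpoint while the dataset was unmounted; a hedged recovery sketch (paths hypothetical, run from maintenance mode):

    # see what is hiding under the mountpoint, move it aside, retry the mounts
    ls -la /export
    mkdir /export.stale && mv /export/* /export.stale/
    zfs mount -a
    svcadm clear svc:/system/filesystem/local:default
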
2009 Dec 27
7
How to destroy your system in a funny way with ZFS
Hi all, I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of the VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I ended up with something funny. I installed default snv_129, installed guest additions -> reboot, set
2008 Jul 31
17
Can I trust ZFS?
Hey folks, I guess this is an odd question to be asking here, but I could do with some feedback from anybody who's actually using ZFS in anger. I'm about to go live with ZFS in our company on a new fileserver, but I have some real concerns about whether I can really trust ZFS to keep my data alive if things go wrong. This is a big step for us; we're a 100% Windows
2009 Mar 28
3
zfs scheduled replication script?
I have a backup system using zfs send/receive (I know there are pros and cons to that, but it's suitable for what I need). What I have now is a script which runs daily, does a zfs send, compresses and writes it to a file, then transfers it with ftp to a remote host. It does a full backup every 1st, and incrementals (with the 1st as reference) after that. It works, but is not quite resource-effective
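
A condensed sketch of such a script, with hypothetical dataset and paths (rotation of the old reference snapshot is omitted for brevity):

    #!/usr/bin/ksh
    # full backup on the 1st of the month, incrementals against @ref otherwise
    POOL=tank/data
    TODAY=`date +%F`
    if [ "`date +%d`" = "01" ]; then
        zfs snapshot $POOL@ref
        zfs send $POOL@ref | gzip > /backup/full-$TODAY.zfs.gz
    else
        zfs snapshot $POOL@$TODAY
        zfs send -i $POOL@ref $POOL@$TODAY | gzip > /backup/incr-$TODAY.zfs.gz
    fi
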
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote: > Brent, > > I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue. > > The other issue I noticed is that, as opposed to the
2008 Jul 25
11
send/receive
I created a snapshot of my whole zpool (zfs version 3): zfs snapshot -r tank@`date +%F_%T` then tried to send it to the remote host: zfs send tank@2008-07-25_09:31:03 | ssh user@10.0.1.14 -i identitykey 'zfs receive tank/tankbackup' but got the error "zfs: command not found", since user is not superuser, even though it is in the root group. I found
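
Two things usually fix this: non-interactive ssh sessions do not get /usr/sbin in PATH, so the remote zfs needs its full path, and receive rights can be delegated so the user need not be root. A sketch reusing the poster's names (zfs allow is only available in later zfs versions):

    zfs send tank@2008-07-25_09:31:03 | \
        ssh -i identitykey user@10.0.1.14 /usr/sbin/zfs receive tank/tankbackup
    # on the remote host, delegate the needed permissions to the user:
    zfs allow user create,mount,receive tank
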
2012 Dec 14
12
any more efficient way to transfer snapshot between two hosts than ssh tunnel?
Assuming a secure and trusted environment, we want to get the maximum transfer speed without the overhead from ssh. Thanks. Fred
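
In a trusted network the classic answer is a raw TCP pipe; a minimal netcat sketch with hypothetical names (no encryption or authentication, so only for closed networks):

    # on the receiver, listen and feed the stream into zfs receive
    # (flag syntax varies between netcat builds; some want "nc -l 9090")
    nc -l -p 9090 | zfs receive tank/fs
    # on the sender
    zfs send tank/fs@snap1 | nc receiver-host 9090
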
2010 Aug 13
26
DO NOT REPLY [Bug 7618] New: symlinks and --link-dest
https://bugzilla.samba.org/show_bug.cgi?id=7618 Summary: symlinks and --link-dest; Product: rsync; Version: 3.0.7; Platform: Other; OS/Version: Linux; Status: NEW; Severity: normal; Priority: P3; Component: core; AssignedTo: wayned at samba.org; ReportedBy: the_majkl at seznam.cz; QAContact:
2008 Dec 08
5
How to use mbuffer with zfs send/recv
>> How do I compile mbuffer for our system? Thanks to Mike Futerko for help with the compile; I now have it installed OK. >> And what syntax do I use to invoke it within the zfs send/recv? Still looking for answers to this one. Any example syntax, gotchas etc. would be much appreciated. -- Kind regards, Jules
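
A commonly cited invocation, with hypothetical dataset, host, and sizing; mbuffer's -I/-O flags move the stream over TCP, -s sets the block size, and -m sets the in-memory buffer that smooths out bursts:

    # receiver: listen on port 9090 with a 1GB buffer
    mbuffer -I 9090 -s 128k -m 1G | zfs receive tank/fs
    # sender:
    zfs send tank/fs@snap1 | mbuffer -O receiver-host:9090 -s 128k -m 1G
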
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello, I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow: ~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs file system with the same test and got 121MB/s. Is there any way to fix this? I would really like to have comparable performance between the zfs filesystem and the zfs zvols. # first test is a
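
A hedged reconstruction of the kind of comparison being run (names and sizes hypothetical):

    # 1MB writes to a raw zvol device
    zfs create -V 10g tank/testvol
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=4096
    # the same writes to a file on a plain zfs filesystem
    zfs create tank/testfs
    dd if=/dev/zero of=/tank/testfs/bigfile bs=1024k count=4096
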
2011 Jun 27
2
Using TSM to back-up glusterfs
Hi, we have been trying to back up a glusterfs (v3.1.4) area to an off-site area using the Tivoli TSM software. The back-up keeps failing with the following typical error messages: 06/14/2011 22:22:58 ANS1587W I/O error reading file attributes for: /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in. errno = 22, Invalid argument 06/14/2011 22:22:59 ANS4007E Error processing
2011 Feb 17
2
BUG: SAMBA 3.5.x and IBM TSM
Hi! I am using SAMBA 3.5.x and it doesn't work with IBM TSM. IBM TSM works properly with SAMBA 3.2.15. Is there any chance to solve this issue in future SAMBA versions? Best regards / Adrian Berlin
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 using UFS, I would use ufsdump and ufsrestore, which worked so well I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
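
The closest zfs equivalent is a recursive send of the root pool; a sketch with a hypothetical external mountpoint (-R preserves the dataset hierarchy and properties):

    zfs snapshot -r rpool@backup
    zfs send -R rpool@backup > /mnt/external/rpool-backup.zfs
    # restore later with something like:
    # zfs receive -Fd rpool < /mnt/external/rpool-backup.zfs
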
2009 Feb 04
8
Data loss bug - sidelined??
In August last year I posted this bug, a brief summary of which would be that ZFS still accepts writes to a faulted pool, causing data loss, and potentially silent data loss: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932 There have been no updates to the bug since September, and nobody seems to be assigned to it. Can somebody let me know what's happening with this
2009 Jul 19
3
Opensolaris domU unable to get dhcp lease
I'm running an Ubuntu 9.04 64-bit dom0 with kernel 2.6.29.6 and Xen 3.4.0. My eth0 is bridged to br0 and to my guest VMs. The dom0 is running a DHCP server on br0 which is able to provide leases to physical machines on eth0, and also to a Windows XP domU which is bridged to br0. However, my OpenSolaris 2009.06 domU is unable to obtain a DHCP lease. On the OpenSolaris side I can see this:
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority or throttling it too much?
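
One way to isolate the bottleneck is to take the network out of the picture and measure the raw send rate; a sketch with hypothetical names (mbuffer prints its throughput as it runs):

    zfs send tank/fs@snap1 | mbuffer -s 128k -m 2G -o /dev/null
    # if this also tops out around 40-50MB/s, the pool, not the link, is the limit
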