similar to: ZFS send/receive while write is enabled on receive side?

Displaying 20 results from an estimated 10000 matches similar to: "ZFS send/receive while write is enabled on receive side?"

2010 Nov 18
9
WarpDrive SLP-300
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html Good stuff for ZFS. Fred
2011 Aug 11
6
unable to mount zfs file system..pl help
# uname -a Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux # rpm -qa|grep zfs zfs-test-0.5.2-1 zfs-modules-0.5.2-1_2.6.18_194.el5 zfs-0.5.2-1 zfs-modules-devel-0.5.2-1_2.6.18_194.el5 zfs-devel-0.5.2-1 # zfs list NAME USED AVAIL REFER MOUNTPOINT pool1 120K 228G 21K /pool1 pool1/fs1 21K 228G 21K /vik [root@
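For reference, the usual way to (re)mount a dataset by hand is sketched below — the dataset and mountpoint names come from the listing above, the rest is a hedged sketch; note that, if memory serves, the very early zfs-on-linux 0.5.x releases predate full filesystem mount support, which may itself be the problem here:

  # point the dataset at its mountpoint and mount it explicitly
  zfs set mountpoint=/vik pool1/fs1
  zfs mount pool1/fs1
  # or mount every dataset that is not yet mounted
  zfs mount -a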
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, trying to see what kind of benefit enabling dedup will give me. The standard practice for reprocessing data that's already stored, to add compression and now dedup, seems to be a send / receive pipe similar to: zfs send -R <old fs>@snap | zfs recv -d <new fs> However, according to the man page,
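Spelled out, the rewrite pipe usually looks like the sketch below — pool and snapshot names are illustrative, and dedup must be enabled on the destination before the data is written:

  # enable dedup on the new pool, then rewrite the data through send/recv
  zfs set dedup=on tank2
  zfs send -R tank/fs@snap | zfs recv -d tank2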
2010 Jan 16
1
Searching log files?
Hi all, has anyone used Xapian/Omega to index and search large amounts of (Unix) server logs? I'm looking to create a search application which will allow me to index and search logs from roughly 20-100 servers but I'm not sure which engine to use that will provide near real time indexing and good search performance. Is this something that can be accomplished with Xapian/Omega? Thanks!
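For what it's worth, a rough sketch of doing this with Omega's omindex plus the quest example tool — all paths here are assumptions, and near-real-time behaviour would mean re-running omindex frequently, e.g. from cron:

  # build or update a Xapian database from the collected log files
  omindex --db /srv/xapian/logs --url / /var/log/remote
  # run an ad-hoc query against that database
  quest -d /srv/xapian/logs "connection refused"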
2007 Apr 16
10
zfs send/receive question
Hello folks, I have a question and a small problem... I tried to replicate my zfs with all the snaps, so I ran a few commands: time zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10 real 6h35m12.34s user 0m0.00s sys 29m32.28s zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool/d@2006_month_12 real 4h49m27.54s user
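For comparison, the usual full-plus-incremental pattern is sketched below; note the second receive presumably should have targeted mypool2 rather than mypool:

  # full send of the baseline snapshot to the second pool
  zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10
  # incremental send of only the changes between the two snapshots
  zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 \
    | zfs receive mypool2/d@2006_month_12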
2010 Nov 09
5
X4540 RIP
Oracle have deleted the best ZFS platform I know, the X4540. Does anyone know of an equivalent system? None of the current Oracle/Sun offerings come close. -- Ian.
2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31 , Carsten Aulbert carsten.aulbert@aei.mpg.de sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack > > without addressing a real problem with zfs send/recv. > > Trying to send any meaningful sized snapshots from say an X4540 takes > > up to 24 hours, for as little as 300GB
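The mbuffer workaround mentioned above typically looks like this sketch — the host name and buffer sizes are assumptions, not from the thread:

  # buffer both ends so zfs send is not stalled by network hiccups
  zfs send tank/fs@snap \
    | mbuffer -s 128k -m 1G \
    | ssh backuphost "mbuffer -s 128k -m 1G | zfs receive -F tank/fs"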
2007 Sep 19
7
ZFS Solaris 10 Update 4 Patches
The latest ZFS patches for Solaris 10 are now available: 120011-14 - SunOS 5.10: kernel patch 120012-14 - SunOS 5.10_x86: kernel patch ZFS Pool Version available with patches = 4 These patches will provide access to all of the latest features and bug fixes: Features: PSARC 2006/288 zpool history PSARC 2006/308 zfs list sort option PSARC 2006/479 zfs receive -F PSARC 2006/486 ZFS canmount
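Applying the patch and then moving existing pools to the new on-disk version would look roughly like this — the spool path is an assumption; use 120012-14 on x86:

  # install the kernel patch, then reboot
  patchadd /var/spool/patch/120011-14
  # afterwards, upgrade all pools to the newly supported pool version
  zpool upgrade -a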
2012 Nov 09
3
Forcing ZFS options
There are times when ZFS options cannot be applied at the moment, i.e. changing desired mountpoints of active filesystems (or setting a mountpoint over a filesystem location that is currently not empty). Such attempts now bail out with messages like: cannot unmount '/var/adm': Device busy cannot mount '/export': directory is not empty and such. Is it
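The usual workaround is to force the unmount first, after which the property change can take effect — a sketch with illustrative dataset names:

  # forcibly unmount, change the mountpoint, then remount
  zfs unmount -f tank/export
  zfs set mountpoint=/export tank/export
  zfs mount tank/export
  # (a non-empty target directory still has to be emptied first, or
  # overlay-mounted with 'zfs mount -O' where that option exists)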
2008 Jun 27
1
'zfs list' output showing incorrect mountpoint after boot -Z
I installed snv_92 with zfs root, then took a snapshot of the root and cloned it. Now I am booting from the clone using the -Z option. The system boots fine from the clone, but 'zfs list' still shows that '/' is mounted on the original mountpoint instead of the clone, even though the output of 'mount' shows that '/' is
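To see the discrepancy directly, comparing ZFS's own view with the kernel's mount table is the quick check — a sketch, with no boot-environment names assumed:

  # what ZFS believes is mounted, and where
  zfs list -o name,mounted,mountpoint
  # what the kernel actually has mounted on /
  df -h /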
2007 Oct 18
1
Source code to ZFS GUI?
Hi all, I noticed that the ZFS source tour page at opensolaris.org does not have the ZFS web based GUI source listed yet. http://www.opensolaris.org/os/community/zfs/source/ "Management GUI Solaris will ship with a web-based ZFS GUI in build 28. While not part of OpenSolaris (yet), it is an example of a Java-based GUI layered on top of the JNI." Has this source been made public
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, Here is the situation: I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a failover MGS and active/active MDT with zfs. I have a jbod shelf with 12 disks, seen by both nodes as DAS (the shelf has 2 sas ports, connected to a sas hba on each node), and I am using lustre 2.4 on centos 6.4 x64. I have created 3 zfs pools: 1. mgs: # zpool
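For context, formatting a ZFS-backed MGS target with both nodes registered for failover might look like the sketch below — the addresses follow the poster's, while the dataset name and mount path are assumptions:

  # format the MGS on the existing 'mgs' pool, listing both service nodes
  mkfs.lustre --mgs --backfstype=zfs \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mgs/mgt
  # bring it up
  mount -t lustre mgs/mgt /mnt/mgs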
2009 Feb 22
11
Confused about zfs recv -d, apparently
First, it fails because the destination directory doesn't exist. Then it fails because it DOES exist. I really expected one of those to work. So, what am I confused about now? (Running 2008.11) # zpool import -R /backups/bup-ruin bup-ruin # zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv bup-ruin/fsfs/zp1 cannot receive: specified fs (bup-ruin/fsfs/zp1)
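The usual tripwire with -d: the filesystem named on the receive side must already exist, and the sent snapshot's path minus its pool name is appended beneath it. A sketch of what was presumably intended:

  # receive under bup-ruin/fsfs; -d recreates zp1's hierarchy beneath it
  zfs send -R zp1@bup-20090222-054457UTC | zfs receive -dv bup-ruin/fsfs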
2007 Nov 15
1
ZFS snapshot send/receive via intermediate device
Hey folks, I have no knowledge at all about how streams work in Solaris, so this might have a simple answer, or be completely impossible. Unfortunately I'm a windows admin so haven't a clue which :) We're looking at rolling out a couple of ZFS servers on our network, and instead of tapes we're considering using off-site NAS boxes for backups. We think
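Streams can indeed be parked as plain files on an intermediate box and replayed later — a minimal sketch with assumed paths, plus the standard caveat that a single corrupted bit makes a stored stream unreceivable:

  # write the incremental stream to a file on the NAS
  zfs send -i tank/fs@monday tank/fs@tuesday > /mnt/nas/fs-tuesday.zstream
  # later, replay it into the destination pool
  zfs receive tank/fs < /mnt/nas/fs-tuesday.zstream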
2009 Feb 03
3
Time taken Backup using ZFS Send Receive
I have been using ZFS send and receive for a while, and I noticed that when I do a send on a zfs file system of about 3 gig plus, it takes only about 3 minutes max: zfs send application/sample@back > /backup/sample.zfs However, when I tried to send a file system that's about 20 gig, it took almost an hour. I would have expected that since 3 gig took 3 mins, then 20 gig should
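Send time tracks how the blocks are laid out on disk, not just the byte count, so the 3-minutes-per-3-gig ratio will not extrapolate linearly. Watching live throughput makes this visible; a sketch using pv as the meter (pv is an assumption — any throughput monitor works):

  # show current and average MB/s while streaming the snapshot
  zfs send application/sample@back | pv > /backup/sample.zfs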
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS Version 10? What, other than zfs send/receive, can be done to free the fragmented space? One ZFS was used for some months to store large disk images (each 50GByte) which were copied there with rsync. This ZFS reports 6.39TByte usage with zfs list and only 2TByte usage with du. The other ZFS was used for similar
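Before blaming fragmentation, it is worth asking ZFS itself where the space went — snapshots holding freed blocks are the common cause of a du/zfs-list gap. A sketch (dataset name assumed; the usedby* breakdown needs a reasonably recent pool version):

  # break USED down by snapshots, the dataset itself, and children
  zfs list -o space tank/images
  zfs get usedbysnapshots,usedbydataset,usedbychildren tank/images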
2013 Feb 17
13
zfs raid1 error resilvering and mount
Hi, I have a zfs raid1 pool with 2 devices; the first device died and booting from the second is not working... I tried the http://mfsbsd.vx.sk/ flash image and loaded from it to run zpool import (screenshot: http://puu.sh/2402E). When I load zfs.ko and opensolaris.ko I see this message: Solaris: WARNING: Can't open objset for zroot/var/crash Solaris: WARNING: Can't open objset for zroot/var/crash zpool status:
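From a rescue image such as mfsBSD, the usual first step is a forced, read-only import under an alternate root so the surviving disk can be inspected — a hedged sketch, with the pool name taken from the post:

  # import the degraded pool read-only under /mnt
  zpool import -f -o readonly=on -R /mnt zroot
  zpool status -v zroot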
2009 May 19
2
ESTALE error while synching
Hi, I am wondering how rsync-3.0.6 reacts if it encounters an ESTALE error while syncing? If I remember correctly, rsync-2.6.0 skips that file/dir in case of an ESTALE error. Jignesh.
2008 Jul 29
8
questions about ZFS Send/Receive
Hi guys, we are proposing to a customer a couple of X4500 (24 Tb) used as NAS (i.e. NFS servers). Both servers will contain the same files and should be accessed by different clients at the same time (i.e. both should be active). So we need to guarantee that both x4500 contain the same files: we could simply copy the contents on both x4500, which is an option because the "new
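If send/receive is chosen for keeping the heads in sync, the usual loop is a snapshot followed by an incremental send over ssh — all names below are assumptions; note that the receive side must not be written to between receives, which is exactly the question in this page's title:

  # take a new snapshot and ship only the delta to the standby head
  zfs snapshot tank/nfs@2008-07-29
  zfs send -i tank/nfs@2008-07-28 tank/nfs@2008-07-29 \
    | ssh x4500-b zfs receive -F tank/nfs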
2013 Aug 21
1
Properties list for zfs in FreeBSD
Hi: Where can I find a list of properties (-o/-O property=value) for creating a zpool? I mean something like:

  # zpool create \
      -o ashift=12 \
      -O dedup=off -O autoexpand=off -O atime=off \
      -O canmount=off \
      -O compression=lz4 \
      -O normalization=formD \
      -O mountpoint=/jail \
      tank \
      mirror \
      /dev/gptid/diskname0 \
      /dev/gptid/diskname1 \
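The authoritative lists live in the zpool(8) and zfs(8) man pages — pool properties go with -o, dataset (filesystem) properties with -O — and a live pool will show everything it knows (the pool name here is illustrative):

  # pool-level properties (set at creation with -o)
  zpool get all tank
  # dataset-level properties (set at creation with -O)
  zfs get all tank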