Displaying 20 results from an estimated 30000 matches similar to: "zfs send/receive with different on disk versions"
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem.
What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
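A minimal sketch of that recovery path, assuming a single-slice root pool on c0t0d0s0, a hypothetical BE name, and an archive at /backup/root.tar created with relative paths (on x86, installgrub replaces installboot):

# zpool create -f -R /a -m legacy rpool c0t0d0s0
# zfs create -o mountpoint=legacy rpool/ROOT
# zfs create -o mountpoint=/ rpool/ROOT/s10be
# zfs create -V 2g rpool/dump
# zfs create -V 2g rpool/swap
# zpool set bootfs=rpool/ROOT/s10be rpool
# cd /a ; tar xf /backup/root.tar
# bootadm update-archive -R /a
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0

With -R /a the BE's mountpoint=/ lands under /a, so the tar restore goes to the alternate root rather than the running system.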
2009 Oct 08
0
zfs send/receive performance concern
I am running zfs send/receive on a ~1.2TB zfs pool spread across 10x200GB LUNs.
It has copied only 650GB in ~42 hours. The source pool and destination pool are on
the same storage subsystem. Last time it ran, it took ~20 hours.
Something is terribly wrong here. What do I need to look at to figure out the
reason?
I ran zpool iostat and iostat on the pool for clues, but I am still
confused.
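For narrowing this down, the usual first stops are per-vdev and per-device statistics; a sketch, assuming the pools are named src and dst (hypothetical):

# zpool iostat -v src 5
# zpool iostat -v dst 5
# iostat -xn 5
# zpool status -v src dst

One LUN with a much higher asvc_t than its peers in iostat -xn, or a scrub/resilver running in zpool status, would explain a 2x slowdown.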
2008 Oct 11
5
questions about replacing a raidz2 vdev disk with a larger one
I'd like to replace/upgrade two 500GB disks in a RaidZ2 vdev with 1TB disks, but I have some preliminary questions/concerns before trying 'zpool replace dpool ?'
Will ZFS permit this replacement?
Will ZFS use the extra space in a heterogeneous RaidZ2 vdev, or is the size limited by the smallest disk in the vdev?
Thanks in advance,
Vizzini
The system is currently running
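For what it's worth, a hedged sketch of the usual procedure (device names hypothetical): replace one disk at a time and let each resilver finish before touching the next:

# zpool replace dpool c1t0d0 c2t0d0
# zpool status dpool

ZFS permits replacement with a larger disk, but the vdev's usable size stays limited by its smallest member until every disk has been upgraded; after that, an export/import (or, on releases that have it, zpool set autoexpand=on dpool) picks up the extra space.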
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).
I thought one of the disks might have been to blame, so I tried swapping it
out
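As a rough sanity check on the yield: a raidz1 vdev of N disks keeps one disk's worth of parity, so 7 equal disks give about 6/7 of raw as usable space. Note too that the two reporting commands disagree by design on raidz pools:

# zpool list magicant
# zfs list magicant

zpool list reports raw capacity including parity, while zfs list reports usable space, which alone can account for a one-disk-sized gap between two numbers being compared.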
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello,
Sorry for the (very) long subject but I've pinpointed the problem to this exact situation.
I know about the other threads related to hangs, but in my case there was no < zfs destroy > involved, nor any compression or deduplication.
To make a long story short, when
- a disk contains 2 partitions (p1=32GB, p2=1800 GB) and
- p1 is used as part of a zfs mirror of rpool
2008 Apr 16
0
ZFS raidz1 replacing failing disk
I'm having a serious problem with a customer running a T2000 with ZFS configured as raidz1 with 4 disks, no spare.
The machine is mostly a cyrus imap server and web application server to run the ajax app to email.
Yesterday we had a heavy slow down.
Tomcat runs smoothly, but the imap access is very slow, also through a direct imap client running on LAN PCs.
We figured out that the 4th
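A sketch of the usual triage for a suspect disk in a raidz1 (device names hypothetical):

# zpool status -v
# iostat -En
# iostat -xn 5
# zpool replace <pool> c1t3d0 c1t4d0

Growing error counters in iostat -En, or one disk whose asvc_t sits far above its peers, points at the drive to replace; with raidz1 and no spare, it is worth replacing it before a second failure.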
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller )
Hi All,
over the last couple of weeks, I had to boot from my rpool from various physical
machines because some component on my laptop mainboard blew up (you know that
burned electronics smell?). I can't retrospectively document all I did, but I am
sure I recreated the boot-archive, ran devfsadm -C and deleted
/etc/zfs/zpool.cache several times.
Now zpool status is referring to a
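For a non-root pool, stale device names are usually cleared by an export/import cycle, which rewrites the paths in zpool.cache; a sketch, assuming a pool named tank (the root pool can't be exported while booted from it, so there the same cycle would have to be done from other boot media):

# zpool export tank
# zpool import -d /dev/dsk tank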
2007 Jan 17
3
Implementation Question
Why does zfs define raidz/raidz2/mirror/stripe at the pool level instead of the filesystem/volume level?
A sample use case: two filesystems in an eight-disk pool. The first filesystem is a stripe across four mirrors. The second filesystem is a raidz2. Both utilize the free space in the 8-disk pool as needed.
Thanks in advance...
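The short answer is that redundancy is defined per vdev, fixed when the vdev is added, while filesystems just allocate from the pool's shared space; so that use case needs two pools today. A sketch (hypothetical devices):

# zpool create fast mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0
# zpool create safe raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0

The two pools can't share free space the way two filesystems in one pool would.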
2009 Feb 12
2
Solaris and zfs versions
We've been experimenting with zfs on OpenSolaris 2008.11. We created a
pool in OpenSolaris and filled it with data. Then we wanted to move it
to a production Solaris 10 machine (generic_137138_09) so I "zpool
exported" in OpenSolaris, moved the storage, and "zpool imported" in
Solaris 10. We got:
Cannot import 'deadpool': pool is formatted
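That error usually means the pool was created at a zpool version newer than the importing release understands. A hedged sketch of the workaround, assuming a release new enough to support the version property (the version number here is hypothetical; match it to what the Solaris 10 machine reports):

# zpool upgrade -v
# zpool create -o version=10 deadpool <devices>

Running zpool upgrade -v on both machines and creating the pool at (or below) the lower of the two supported versions keeps it importable in both directions.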
2010 Feb 10
5
zfs receive : is this expected ?
amber ~ # zpool list data
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
data 930G 295G 635G 31% 1.00x ONLINE -
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata/data
cannot receive:
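With -d, the top-level dataset of a -R stream maps onto the destination itself, and ezdata already exists, so the receive has to be forced; a sketch:

# zfs send -RD data@prededup | zfs recv -dF ezdata

-F rolls the destination back and overwrites conflicting datasets, so it's worth double-checking that ezdata holds nothing still needed.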
2007 Jul 14
3
zfs list hangs if zfs send is killed (leaving zfs receive process)
I was in the process of doing a large zfs send | zfs receive when I decided that I wanted to terminate the zfs send process. I killed it, but the zfs receive doesn't want to die... In the meantime my zfs list command just hangs.
Here is the tail end of the truss output from a "truss zfs list":
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08043484) = 0
ioctl(3,
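For locating (and, if possible, clearing) the leftover receive, a sketch:

# pgrep -fl 'zfs receive'
# pstack <pid>
# kill <pid>

If the process is blocked in the kernel, kill won't take effect and pstack will show it sitting in an ioctl; a kernel thread listing via mdb -k is the next diagnostic step, but a reboot is often what actually clears it.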
2010 Mar 05
2
ZFS replication send/receive errors out
My full backup script errored out the last two times I ran it. I've got
a full Bash trace of it, so I know exactly what was done.
There are a moderate number of snapshots on the zp1 pool, and I'm
intending to replicate the whole thing into the backup pool.
After housekeeping, I make a current snapshot on the data pool (zp1).
Since this is a new full backup, I then
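The replication itself, for reference, is usually a two-command affair; a sketch with hypothetical snapshot names, assuming the backup pool is named backup:

# zfs snapshot -r zp1@2010-03-05
# zfs send -R zp1@2010-03-05 | zfs recv -dF backup

and, for subsequent runs, an incremental stream of everything since the previous snapshot:

# zfs send -R -I zp1@2010-02-08 zp1@2010-03-05 | zfs recv -dF backup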
2009 Sep 10
3
zfs send of a cloned zvol
Hi,
I have a question: let's say I have a zvol named vol1 which is a clone of a snapshot of another zvol (its origin property is tank/myvol@mysnap).
If I send this zvol to a different zpool through a zfs send, does it send the origin too? That is, does an automatic promotion happen, or do I end up with a broken zvol?
Best regards.
Maurilio.
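For what it's worth, a plain (non-incremental) zfs send of a snapshot writes a self-contained stream: it includes all blocks the snapshot references, even those shared with the origin, so the received dataset is independent and has no origin property. No promotion is needed and nothing arrives broken; it is only zfs send -R of a whole tree that preserves clone/origin relationships. A sketch with a hypothetical snapshot name:

# zfs get origin tank/vol1
# zfs snapshot tank/vol1@move
# zfs send tank/vol1@move | zfs recv otherpool/vol1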
2006 Jun 20
3
nevada_41 and zfs disk partition
I just installed build 41 of Nevada on a SunBlade 1500 with 2GB of ram. I wanted to check out zfs; given the delay of S10U2, I really could not wait any longer :)
I installed it on my system and created a zpool out of an approximately 40GB disk slice. I then wanted to build a version of thunderbird that contains a local patch that we like. So I download the source tar ball. I try to untar it on the
2010 Dec 17
6
copy complete zpool via zfs send/recv
Hi,
I want to move all the ZFS fs from one pool to another, but I don't want
to "gain" an extra level in the folder structure on the target pool.
On the source zpool I used zfs snapshot -r tank@moveTank on the root fs
and I got a new snapshot in all sub fs, as expected.
Now, I want to use zfs send -R tank@moveTank | zfs recv targetTank/...
which would place all zfs fs
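The -d option to zfs recv is the usual answer here: it strips the source pool name from each received dataset, so tank/a/b lands at targetTank/a/b rather than targetTank/tank/a/b. A sketch:

# zfs send -R tank@moveTank | zfs recv -dF targetTank

-F is needed because the stream's top-level dataset maps onto targetTank itself, which already exists.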
2007 Nov 15
1
ZFS snapshot send/receive via intermediate device
Hey folks,
I have no knowledge at all about how streams work in Solaris, so this might have a simple answer, or be completely impossible. Unfortunately I'm a windows admin so haven't a clue which :)
We're looking at rolling out a couple of ZFS servers on our network, and instead of tapes we're considering using off-site NAS boxes for backups. We think
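A send stream can simply be redirected to a file on the NAS and received later, which sidesteps needing zfs on the intermediate box; a sketch with hypothetical names and paths:

# zfs send tank/data@backup1 > /net/nas/backups/data-backup1.zfs
# zfs recv -F tank/data < /net/nas/backups/data-backup1.zfs

The usual caveat: a stream stored as a file has no redundancy, and a single corrupted bit makes the whole stream unreceivable, so receiving directly into a live pool on the backup box is generally considered safer than archiving raw streams.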
2012 Nov 11
0
Expanding a ZFS pool disk in Solaris 10 on VMWare (or other expandable storage technology)
Hello all,
This is not so much a question but rather a "how-to" for posterity.
Comments and possible fixes are welcome, though.
I'm toying (for work) with a Solaris 10 VM, and it has a dedicated
virtual HDD for data and zones. The template VM had a 20Gb disk,
but a particular application needs more. I hoped ZFS autoexpand
would do the trick transparently, but it turned out
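The sketch below shows the two knobs involved once the virtual disk has been grown (pool and device names hypothetical; autoexpand needs a reasonably recent Solaris 10 update):

# zpool set autoexpand=on datapool
# zpool online -e datapool c1t1d0

zpool online -e asks ZFS to expand onto the newly available space even when autoexpand was off at the time the LUN grew; an EFI-labeled disk may additionally need its label refreshed with format before ZFS sees the new size.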
2010 Apr 27
2
ZFS version information changes (heads up)
Hi everyone,
Please review the information below regarding access to ZFS version
information.
Let me know if you have questions.
Thanks,
Cindy
CR 6898657:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6898657
ZFS commands zpool upgrade -v and zfs upgrade -v refer to URLs that
are no longer redirected to the correct location after April 30, 2010.
Description
The
2008 Jul 25
11
send/receive
I created snapshot for my whole zpool (zfs version 3):
zfs snapshot -r tank@`date +%F_%T`
then tried to send it to the remote host:
zfs send tank@2008-07-25_09:31:03 | ssh user@10.0.1.14 -i identitykey 'zfs
receive tank/tankbackup'
but got the error "zfs: command not found" since the user is not superuser, even
though it is in the root group.
I found
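Two things usually fix this: non-interactive ssh sessions don't get /usr/sbin in PATH, so the remote command needs a full path, and the remote user needs ZFS privileges (via pfexec/RBAC or, on releases with delegated administration, zfs allow). A sketch, with the zfs allow line run as root on the receiving host:

zfs send tank@2008-07-25_09:31:03 | ssh user@10.0.1.14 -i identitykey '/usr/sbin/zfs receive tank/tankbackup'
# zfs allow user create,mount,receive tank/tankbackup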
2010 Feb 08
1
Big send/receive hangs on 2009.06
So, I was running my full backup last night, backing up my main data
pool zp1, and it seems to have hung.
Any suggestions for additional data gathering?
-bash-3.2$ zpool status zp1
pool: zp1
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool
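For data gathering on a hang like this, a sketch of commonly suggested steps (pids hypothetical):

# ps -ef | grep zfs
# pstack <send-pid> <recv-pid>
# zpool iostat -v zp1 5
# echo "::threadlist -v" | mdb -k

Userland stacks from pstack show whether the processes are stuck in an ioctl, zpool iostat shows whether any I/O is still trickling, and the kernel thread list from mdb -k is what a bug report would want attached.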