similar to: zfs streams

Displaying 19 results from an estimated 4000 matches similar to: "zfs streams"

2008 Nov 08
7
Paravirtualized Solaris Update 6 (10/08)?
Gurus, I've been running Solaris 10 in an HVM domain on my machine (running SXCE snv_93 x86) for some time now. Now that Solaris 10 Update 6 (10/08) has been released, I tried creating a paravirtualized guest domain but got the same error message I got previously... # virt-install -n sol10 -p -r 1560 --nographics -f /dev/zvol/dsk/rpool/sol10 -l /stage/sol-10-u6-> Starting
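For reference, a paravirtualized Solaris guest under xVM was typically set up along these lines. This is only a sketch: the guest name, memory size, zvol size and install-media path below are assumptions, not the poster's actual values.

  # back the guest disk with a zvol (16 GB is an arbitrary assumed size)
  zfs create -V 16G rpool/sol10
  # paravirtualized (-p) guest with ~1.5 GB RAM, serial console, installing from local media
  virt-install -n sol10 -p -r 1536 --nographics \
      -f /dev/zvol/dsk/rpool/sol10 -l /path/to/sol-10-media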
2010 Mar 02
11
Expand zpool capacity
Hello, experts. I've got a problem. I'm trying to expand my main zpool (rpool), but don't know how to do that (I'm a 100% newbie in the non-Windows world). I use OpenSolaris under VMware on Windows. I had a pretty small vhdd -> only 12 GB. Yesterday I decided to expand my virtual drive to 20 GB. (After several tries to upgrade the OS to the newest dev releases and
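Once the virtual disk itself has been grown, ZFS still has to be told about the new space. A minimal sketch, assuming the containing slice has already been grown with format/fdisk and that the pool sits on a single device whose name here is hypothetical:

  # let the pool grow automatically whenever its devices get bigger
  zpool set autoexpand=on rpool
  # or expand one device in place right away
  zpool online -e rpool c8t0d0s0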
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part. The system reboots immediately. Here is the log in /var/adm/messages: Feb 8 16:07:09 amber unix: [ID 836849 kern.notice] Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40: Feb 8
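For context, the kind of command being discussed looks roughly like this (pool, dataset and host names are hypothetical); -d tells the receiving side to recreate the sent dataset hierarchy under the target:

  # recursive replication stream of a dataset and its children
  zfs snapshot -r srcpool/data@migrate
  zfs send -R srcpool/data@migrate | ssh backuphost zfs receive -d destpool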
2009 Dec 27
7
How to destroy your system in a funny way with ZFS
Hi all, I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of the VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I end up with something funny. I installed default snv_129, installed guest additions -> reboot, set
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists- workaround found
Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool. 'zfs destroy -r pool/dataset' hung the machine within seconds
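A common way to avoid handing ZFS one enormous recursive destroy is to remove the space-holding snapshot explicitly before the dataset. This is a sketch with hypothetical names, not necessarily the exact workaround the poster found:

  # free the space held by the snapshot first...
  zfs destroy pool/dataset@monthly-snap
  # ...then remove the (now nearly empty) dataset itself
  zfs destroy pool/dataset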
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading; Documents = 147MB, Videos = 11G, Software = 1.4G. By my calculations, that equals about 12.5G, yet zpool list is showing 21G as being allocated; NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT dpool 27.2T 21.2G 27.2T 0% 1.00x ONLINE - It doesn't look like
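When the two numbers disagree, it usually helps to compare dataset-level and pool-level accounting side by side; a sketch using the pool name from the listing above:

  # per-dataset view: data, snapshots, reservations and children broken out
  zfs list -o space -r dpool
  # pool view: raw allocation, which on raidz vdevs also includes parity
  zpool list dpool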
2008 Nov 16
4
[ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM
I've tried using S10 U6 to reinstall the boot file (instead of U5) over jumpstart, as it's an LDOM, and noticed another error. Boot device: /virtual-devices@100/channel-devices@200/network@0 File and args: -s Requesting Internet Address for 0:14:4f:f9:84:f3 boot: cannot open kernel/sparcv9/unix Enter filename [kernel/sparcv9/unix]: Has anyone seen this error on U6 jumpstart, or is
2009 Apr 19
21
[on-discuss] Reliability at power failure?
Casper.Dik@Sun.COM wrote: > > I would suggest that you follow my recipe: not check the boot-archive > during a reboot. And then report back. (I'm assuming that that will take > several weeks) > We are back at square one; or, at the subject line. I did a zpool status -v, everything was hunky dory. Next, a power failure, 2 hours later, and this is what zpool status
2008 Jul 25
11
send/receive
I created a snapshot for my whole zpool (zfs version 3): zfs snapshot -r tank@`date +%F_%T` then tried to send it to the remote host: zfs send tank@2008-07-25_09:31:03 | ssh user@10.0.1.14 -i identitykey 'zfs receive tank/tankbackup' but got the error "zfs: command not found", since user is not superuser, even though it is in the root group. I found
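The "command not found" here is typically just a PATH/privilege issue on the receiving side. One way around it, sketched with the names from the message (the zfs allow step is an assumption about how the receiver could be set up):

  # run once as root on the receiving host: delegate only what receive needs
  zfs allow user create,mount,receive tank
  # call zfs by full path, since /usr/sbin is usually not on a non-root user's PATH
  zfs send tank@2008-07-25_09:31:03 | \
      ssh -i identitykey user@10.0.1.14 /usr/sbin/zfs receive tank/tankbackup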
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine, I understand, is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file: zfs snapshot -r rpool@0908 ; zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908 INCREMENTAL backup to a file: zfs snapshot -r rpool@090822 ; zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822 As I understand it, the latter gives a file with the changes between 0908 and 090822. Is this correct? How do I restore those files? I know
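Restoring works the same way in reverse: feed the full stream to zfs receive first, then the incremental on top of it. A minimal sketch assuming the streams were written as above (-F rolls the target back to the matching snapshot, -d keeps the sent dataset names under the target pool):

  # restore the full stream, then replay the incremental
  zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.0908
  zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.090822

For a root pool this would normally be run from install/failsafe media against a freshly created pool rather than against the pool currently in use.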
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with aid of tmpfs
Hello all, I'd like to report a tricky situation and a workaround I've found useful - hope this helps someone in similar situations. To cut a long story short, I could not properly mount some datasets from a read-only pool which had a non-"legacy" mountpoint attribute value set, but the mountpoint was not available (directory absent or not empty). In this case
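The shape of the workaround is roughly the following (all names hypothetical): because the pool is read-only, the missing mountpoint directory cannot be created on it, so a scratch tmpfs is laid over the parent directory first:

  # cover the read-only parent with a writable scratch filesystem
  mount -F tmpfs swap /ropool/export
  # recreate the missing mountpoint inside the tmpfs, then mount the dataset
  mkdir -p /ropool/export/home
  zfs mount ropool/export/home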
2012 Dec 20
3
Pool performance when nearly full
Hi, I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking (and I'd check the ZFS wikis but the websites are down at the moment). Firstly, which is correct, the free space shown by "zfs list" or by "zpool iostat"? zfs list: used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4% zpool iostat: used
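As a rule of thumb, "zfs list" is the number that reflects usable space (it already accounts for raidz parity, reservations and quotas), while "zpool list" and "zpool iostat" report raw pool allocation. A quick way to compare the two views, with a hypothetical pool name:

  zfs list -o space tank      # usable space per dataset, snapshots broken out
  zpool list tank             # raw allocation, parity included on raidz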
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao, the root filesystem of my Thumper is a ZFS with a single disk: bash-3.2# zpool status rpool pool: rpool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c5t0d0s0 ONLINE 0 0 0 spares c0t7d0 AVAIL c1t6d0 AVAIL c1t7d0
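A single-disk top-level vdev can be turned into a mirror later with zpool attach; a sketch using the existing device from the status output and a hypothetical second disk:

  # attach a second device to the existing one; resilvering starts automatically
  zpool attach rpool c5t0d0s0 c5t4d0s0

For a root pool the new disk also needs boot blocks installed (installgrub on x86) before it can boot the system on its own.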
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and recently it has started failing to boot - it hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB I have had around since installation, I ran some zdb traversals over the rpool and some zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed on the list with
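One option often suggested in this situation, if the installed ZFS bits support it, is a read-only import, which skips intent-log replay and can avoid some of the work that exhausts memory during a normal import. A sketch with an alternate root so nothing mounts over the live (LiveUSB) system:

  zpool import -o readonly=on -R /a -f rpool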
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09... There are supposed to be performance improvements if you create a zpool on a full disk, such as one with an EFI label. Does the same apply if the full disk is used with an SMI label, which is required to boot? I am trying to determine the trade-off, if any, of having a single rpool on cXtYd0s2, if I can even do that, and improved performance compared to having two
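For comparison, the two layouts under discussion look like this (pool and device names are hypothetical). Given a whole disk, ZFS writes an EFI label and can safely enable the disk's write cache itself, whereas a root pool at that time had to live on a slice of an SMI-labeled disk:

  # whole disk: EFI label, write cache managed by ZFS
  zpool create datapool c1t2d0
  # bootable root pool: a slice (s0) on an SMI-labeled disk
  zpool create rpool c1t0d0s0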
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5). chris@bob:~# zpool
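The usual sequence for checking and performing the on-disk upgrades is sketched below (the pool name is hypothetical, since the listing cuts off before showing it):

  # show what the running bits support and what the pool/datasets are at
  zpool upgrade -v
  zfs upgrade -v
  # upgrade the pool, then its datasets recursively
  zpool upgrade bobpool
  zfs upgrade -r bobpool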
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all, I have a 5-drive RAIDZ volume with data that I'd like to recover. The long story runs roughly: 1) The volume was running fine under FreeBSD on motherboard SATA controllers. 2) Two drives were moved to an HP P411 SAS/SATA controller. 3) I *think* the HP controller wrote some volume information to the end of each disk (hence no more ZFS labels 2,3). 4) In its "auto
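A useful first diagnostic for suspected label damage is dumping the four labels of each member device; labels 2 and 3 sit at the end of the disk, which is exactly where a foreign controller is most likely to have written its own metadata. The device path below is hypothetical:

  # print all four ZFS labels found on the device (a FreeBSD path would look like /dev/ada2)
  zdb -l /dev/dsk/c2t0d0s0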
2011 Aug 10
9
zfs destroy snapshot takes hours
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G. Could you please help me resolve this issue - why does zfs destroy take this much time? Taking the snapshot is done within a few seconds. I have tried removing an older snapshot, but the problem is still the same. =========================== I am using: Release: OpenSolaris
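Destroying a large snapshot can take hours when deduplication is enabled, because every freed block requires a dedup-table lookup and update. Whether that applies here can be checked with something like the following (pool and dataset names are hypothetical):

  zfs get dedup tank/data     # is dedup on for the dataset?
  zpool status -D tank        # -D prints dedup table (DDT) statistics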