similar to: creating zvols in a non-global zone (or "Doctor, it hurts when I do this")

Displaying 20 results from an estimated 2000 matches similar to: "creating zvols in a non-global zone (or 'Doctor, it hurts when I do this')"

2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem on a zvol turn the block storage into swiss cheese? I am considering serving ext3 journals (and possibly swap too) off a raw, hardware-mirrored device. Before I do (and I'll write up any results) I'd like to know
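(The setup under discussion, as a hedged sketch with hypothetical names: a zvol exported over the legacy Solaris iSCSI target support of that era, plus an external ext3 journal on a separate raw device on the Linux side.)
# zfs create -V 20G tank/ext3vol
# zfs set shareiscsi=on tank/ext3vol
# iscsitadm list target
On the Linux initiator, an external journal on a raw mirrored device would look roughly like:
# mke2fs -O journal_dev /dev/sdb
# mkfs.ext3 -J device=/dev/sdb /dev/sdc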
2009 Jun 16
3
Adding zvols to a DomU
I'm trying to add extra zvols to a Solaris 10 DomU, sv_113 Dom0. I can use virsh attach-disk <name> <zvol> hdb --device phy to attach the zvol as c0d1. Replacing hdb with hdd gives me c1d1, but that is it. Being able to attach several more zvols would be nice, but even being able to get at c1d0 would be useful. Am I missing something, or can I only attach to hda/hdb/hdd?
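(A hedged sketch of the attach pattern described above, with hypothetical domain and zvol names; the flags are the ones the poster used.)
# virsh attach-disk mydomu /dev/zvol/dsk/tank/domu-disk1 hdb --device phy
# virsh attach-disk mydomu /dev/zvol/dsk/tank/domu-disk2 hdd --device phy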
2006 Oct 18
5
ZFS and IBM sdd (vpath)
Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, MPXIO or VxDMP. Here is the error message when I try to create my pool:
bash-3.00# zpool create tank /dev/dsk/vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
bash-3.00# zpool create tank /dev/dsk/vpath1c
cannot open
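(A hedged sketch of sanity-checking the vpath device before handing it to zpool; device names as above. -f only bypasses the in-use check, so it may or may not get past the internal error, and it is a workaround rather than a fix for the underlying sdd issue.)
bash-3.00# prtvtoc /dev/rdsk/vpath1c
bash-3.00# zpool create -f tank /dev/dsk/vpath1c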
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it had a hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated.
------------------- EMAIL -------------------
List of faulty resources:
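(Two recovery options that exist in builds of that vintage, as a hedged sketch; whether they apply depends on the build and on how damaged the pool is. readonly=on avoids replaying damaged in-flight state; -F asks for a rewind to an earlier consistent transaction group, discarding the last few seconds of writes.)
# zpool import -o readonly=on tank
# zpool import -F tank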
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can see, he has mostly raidz vdevs but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM
> NAME       STATE     READ WRITE CKSUM
>
> chipool1   ONLINE       0     0     0
>
>
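(A hedged sketch of how such a layout comes about, with hypothetical device names: zpool normally refuses to mix raidz and raidz2 top-level vdevs and warns about a mismatched replication level, so the layout described can only have been created by forcing it.)
# zpool create -f chipool1 raidz c0t0d0 c0t1d0 c0t2d0 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0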
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither one of us is on this alias** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone config (see the long problem description for details). After the initial boot of the zone
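(A hedged sketch of the kind of match statement being described, with hypothetical zone and dataset names.)
# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/zvol/*dsk/tank/zonevols/*
zonecfg:myzone:device> end
zonecfg:myzone> commit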
2007 Aug 23
1
EOF broken on zvol raw devices?
> I tried to copy an 8GB Xen domU disk image from a zvol device
> to an image file on a ufs filesystem, and was surprised that
> reading from the zvol character device doesn't detect "EOF".
>
> On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this:
>
> # zfs create -V 1440k tank/floppy-img
>
> # dd if=/dev/zvol/dsk/tank/floppy-img
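(A hedged reconstruction of the reported repro, with a hypothetical output file: reads from the block device stop at the 1440k volume size, while the complaint is that reads from the raw/character device do not see EOF.)
# zfs create -V 1440k tank/floppy-img
# dd if=/dev/zvol/dsk/tank/floppy-img of=/tmp/floppy.img bs=512
# dd if=/dev/zvol/rdsk/tank/floppy-img of=/tmp/floppy.img bs=512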
2006 May 16
8
ZFS recovery from a disk losing power
Running b37 on amd64. After removing power from a disk configured as a mirror, 10 minutes have passed and ZFS has still not offlined it.
# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear
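(The usual manual steps for this situation, as a hedged sketch with a hypothetical device name; the thread is about why ZFS does not do this on its own. Offline the dead half of the mirror by hand, online it once power is restored, then clear the error counters.)
# zpool offline tank c1t2d0
# zpool online tank c1t2d0
# zpool clear tank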
2009 Mar 03
0
HEADS UP: iSCSI, zvol, and vdisk format support
Looks like I forgot to send this to the alias... MRJ
----
A quick heads up for the iscsi, zvol, and vdisk format putback I did into the 3.3 development gate.

ISCSI
=====
With this putback, you can now install onto and boot a guest using iSCSI disk(s). The iscsi formats supported are:
phy:iscsi:/alias/<iscsi-alias>
phy:iscsi:/static/<server IP>/<lun>/<target
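(A hedged guess at how the alias form might appear in a guest disk specification; the alias name is hypothetical and the exact placement depends on the putback described above.)
disk = ['phy:iscsi:/alias/my-iscsi-alias,0,w']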
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I've got to say I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
2006 Aug 04
11
Assertion raised during zfs share?
Working to get ZFS to run on a minimal Solaris 10 U2 configuration. In this scenario, ZFS is included in the miniroot, which is booted into RAM. When trying to share one of the filesystems, an assertion is raised - see below. If the version of source on OpenSolaris.org matches Solaris 10 U2, then it looks like it's associated with a popen of /usr/sbin/share. Can anyone shed any
2009 Mar 09
1
Other zvols for swap and dump?
Can you use a different zvol for dump and swap rather than using the swap and dump zvol created by liveupgrade? Casper
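(A hedged sketch of doing exactly that with separately created zvols; names and sizes are hypothetical.)
# zfs create -V 4G rpool/swap2
# swap -a /dev/zvol/dsk/rpool/swap2
# zfs create -V 2G rpool/dump2
# dumpadm -d /dev/zvol/dsk/rpool/dump2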
2008 Feb 07
0
getting and setting properties on zvols from kernel
Hi, Is there an interface to get/set properties for a zvol (given its /dev/ pathname) from the Solaris kernel? Also, is there a way to register callbacks in a kernel module to get notified when a property changes for a zvol? Thanks, Sumit -- This message posted from opensolaris.org
2009 Nov 20
1
Using local disk for cache on an iSCSI zvol...
I'm just wondering if anyone has tried this, and what the performance has been like. Scenario: I've got a bunch of v20z machines, with 2 disks. One has the OS on it, and the other is free. As these are disposable client machines, I'm not going to mirror the OS disk. I have a disk server with a striped mirror zpool, carved into a bunch of zvols, each exported via
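(A hedged sketch of the client side of the scenario, with hypothetical device names: the pool lives on the iSCSI LUN served from the disk server's zvol, and the spare local disk is added as an L2ARC cache device.)
# zpool create clientpool c2t0d0
# zpool add clientpool cache c0t1d0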
2009 Sep 10
3
zfs send of a cloned zvol
Hi, I have a question. Let's say I have a zvol named vol1 which is a clone of a snapshot of another zvol (its origin property is tank/myvol@mysnap). If I send this zvol to a different zpool through a zfs send, does it send the origin too? That is, does an automatic promotion happen, or do I end up with a broken zvol? Best regards. Maurilio. -- This message posted from
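(A hedged sketch of the two ways this is usually handled; names follow the question above. A plain send of the clone's snapshot produces a self-contained stream, while sending the origin snapshot first and then an incremental preserves the clone relationship on the target.)
# zfs snapshot tank/vol1@tosend
# zfs send tank/vol1@tosend | zfs recv otherpool/vol1
or:
# zfs send tank/myvol@mysnap | zfs recv otherpool/myvol
# zfs send -i tank/myvol@mysnap tank/vol1@tosend | zfs recv otherpool/vol1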
2008 Sep 25
4
Help with b97 HVM zvol-backed DomU disk performance
Hi Folks, I was wondering if anyone has any pointers/suggestions on how I might increase the disk performance of an HVM zvol-backed DomU? - this is my first DomU, so hopefully it's something obvious. Running bonnie++ shows the DomU's performance to be 3 orders of magnitude worse than Dom0's, which itself is half as good as when not running xVM at all (see bottom for bonnie++ results)
2005 Nov 20
11
NFS question (and Best Practices)
I saw in another post that a best practices doc will be coming, but I figured I would try to get this working. I'm trying to understand why zfs uses so many "zfs create"s, so I can use it better. What makes sense is that each zfs fs can have its own options (compression, nfs, atime, quota, etc). I really love this because it is so tuneable -- compression on these
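(A hedged illustration of the per-filesystem tuning being described, with hypothetical dataset names: each "zfs create" gives you a filesystem whose options can be set independently.)
# zfs create tank/home
# zfs create tank/home/alice
# zfs set compression=on tank/home/alice
# zfs set sharenfs=on tank/home/alice
# zfs set quota=10G tank/home/alice
# zfs set atime=off tank/home/alice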
2006 Nov 20
1
Temporary mount Properties, small bug?
Hi, Just playing with zfs and the admin manual ...
# mkfile 100m /export/zfs/disk1
# zpool create data /export/zfs/disk1
# zfs create data/users
# zfs mount -o remount,noatime data/users
# zfs get all data/users
NAME        PROPERTY  VALUE                  SOURCE
data/users  type      filesystem             -
data/users  creation  Mon Nov 20 11:25 2006
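(Checking just the remounted property makes the temporary option easier to see; a sketch of the expected output, whose exact formatting may differ by build:)
# zfs get atime data/users
NAME        PROPERTY  VALUE  SOURCE
data/users  atime     off    temporary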
2007 Apr 26
7
device name changing
Hi. If I create a zpool with the following command: zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 and after a reboot the device names are for some reason changed so that da2 and da5 are swapped (either by altering the LUN setting on the storage, or by switching cables/swapping disks, etc.), how will zfs handle that? Will it simply acknowledge that all devices are present and the pool is
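(ZFS identifies pool members by the labels written on the disks rather than by device path, so renamed devices are normally picked up again on import; a hedged sketch:)
# zpool export tank
# zpool import tank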
2006 Aug 01
5
ZFS, block device and Xen?
Hi There, I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen domU file systems that are not ZFS. Couldn't find an answer to whether ZFS could be used only as a "regular" volume manager to create logical volumes for UFS or even a Linux ext2fs, with, ideally, the ability to create
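(A hedged sketch of using a zvol as a plain block device for a non-ZFS filesystem, with hypothetical names: create the volume, put UFS on its raw device, and hand the block device to the domU as a phy: disk.)
# zfs create -V 10G tank/xenroot
# newfs /dev/zvol/rdsk/tank/xenroot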