similar to: Best way to create a vdisk on zpool/zfs

Displaying 20 results from an estimated 8000 matches similar to: "Best way to create a vdisk on zpool/zfs"

2011 Jul 10
3
How to create a FAT filesystem on a zvol?
The `lofiadm' man page describes how to export a file as a block device and then use `mkfs -F pcfs' to create a FAT filesystem on it. Can't I do the same thing by first creating a zvol and then creating a FAT filesystem on it? Nothing I've tried seems to work. Isn't the zvol just another block device? -- -Gary Mills- -Unix Group-
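A minimal sketch of the zvol route, assuming a throwaway volume (rpool/fatvol is a made-up name); pcfs generally wants an explicit size when the device carries no fdisk label:
  zfs create -V 512m rpool/fatvol
  # size= is in 512-byte sectors; 512 MB = 1048576 sectors
  mkfs -F pcfs -o nofdisk,size=1048576 /dev/zvol/rdsk/rpool/fatvol
  mount -F pcfs /dev/zvol/dsk/rpool/fatvol /mnt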
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither of us is on this alias** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone configuration (see the long problem description for details). After the initial boot of the zone
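For reference, a device match of that sort is normally added with zonecfg; the zone and dataset names below are invented:
  # delegate every zvol under tank/zonevols to the zone via a wildcard match
  zonecfg -z myzone 'add device; set match=/dev/zvol/rdsk/tank/zonevols/*; end'
  zonecfg -z myzone info device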
2009 Nov 20
3
OS b127, fail to create vdisk
Here is the error I get: Unable to complete install 'exceptions.RuntimeError Error creating vdisk /var/lib/xen/images/centos.img Traceback (most recent call last): File "/var/tmp/pkgbuild-gbuild/SUNWvirt-manager-0.6.1-build/virtManager/create.py", line 730, in do_install File "/export/builds/xvm_127///proto/install/usr/lib Please could you help? I would like to
2009 Jun 29
7
ZFS - SWAP and lucreate..
Good morning everybody. I was migrating my ufs root file system to a zfs one, but was a little upset to find that it became bigger (which was clearly because of the swap and dump sizes). Now I am wondering whether it is possible to set the swap and dump sizes with the lucreate command (I want to try it again with less space). Unfortunately I did not find any advice in the man pages.
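One common approach is simply to resize the swap and dump zvols after the migration rather than at lucreate time; the 2g/1g figures below are placeholders, not recommendations:
  swap -d /dev/zvol/dsk/rpool/swap
  zfs set volsize=2g rpool/swap
  swap -a /dev/zvol/dsk/rpool/swap
  zfs set volsize=1g rpool/dump
  dumpadm -d /dev/zvol/dsk/rpool/dump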
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
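The split itself is simple once the SSD carries two slices; a sketch, with invented device names, where s0 holds the root pool and s1 becomes L2ARC for whichever pool you want accelerated:
  zpool add tank cache c2t1d0s1
  # watch the cache device warm up
  zpool iostat -v tank 5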
2009 Aug 12
4
zpool import -f rpool hangs
I had an rpool with two sata disks in a mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc Unfortunately the first disk, which holds the grub loader, failed with unrecoverable block read/write errors. Now I have a problem importing rpool after the first disk has failed. So I decided to run "zpool import -f rpool" with only the second disk, but it hangs and the system is
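When booted from install or live media, a forced import under an alternate root is the usual first step; /a is just a conventional mount point and the device name below is hypothetical:
  zpool import -f -R /a rpool
  zpool status -v rpool
  # once imported, the dead half of the mirror can be dropped
  zpool detach rpool c0t0d0s0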
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello, I have a problem that is confusing me. I hope someone can help me with it. I followed a "best practice", I think, using dedicated zfs filesystems for my virtual machines. Commands (for completeness): "zfs create rpool/vms", "zfs create rpool/vms/vm1", "zfs create -V 10G rpool/vms/vm1/vm1-dsk". This command creates the file system /rpool/vms/vm1/vm1-dsk and the
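Worth noting that "zfs create -V" produces a volume rather than a mounted file system; it shows up under /dev/zvol, and that device path is what the VM configuration should reference. A quick check, using the names from the excerpt above:
  zfs list -t volume -o name,volsize,refreservation rpool/vms/vm1/vm1-dsk
  ls -lL /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk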
2009 Sep 10
3
zfs send of a cloned zvol
Hi, I have a question. Let's say I have a zvol named vol1 which is a clone of a snapshot of another zvol (its origin property is tank/myvol@mysnap). If I send this zvol to a different zpool through a zfs send, does it send the origin too? That is, does an automatic promotion happen, or do I end up with a broken zvol? Best regards. Maurilio. -- This message posted from
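As far as I know there is no automatic promotion: a plain send of the clone produces a full, self-contained stream. To preserve the clone relationship and the shared blocks, the usual pattern is to send the origin snapshot first and then send the clone incrementally from it; pool2 and the @move snapshot below are hypothetical:
  zfs send tank/myvol@mysnap | zfs recv pool2/myvol
  zfs snapshot tank/vol1@move
  zfs send -i tank/myvol@mysnap tank/vol1@move | zfs recv pool2/vol1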
2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you have several zfs filesystems under one top level directory like this: rpool rpool/ROOT/osol-112 rpool/export rpool/export/home rpool/export/home/reader you could do a snapshot encompassing everything below rpool instead of having to do it at each level. (Maybe it was in a dream...)
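That ability does exist: "zfs snapshot -r" snapshots a dataset and everything beneath it atomically. A short sketch, with an arbitrary snapshot name:
  zfs snapshot -r rpool@2009-06-23
  zfs list -t snapshot -r rpool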
2009 Aug 26
3
Import vmware vmdk into xVM (osol-2009.06)
Good afternoon, I was wondering if anyone has any insight as to how to import a VMware vmdk into xVM on OpenSolaris 2009.06 (xVM 3.1). I have a VMware VM created on VMware Server 2.0 and would like to move it over to this xVM server. I appreciate any advice anyone may have. Cheers, -Chris -- This message posted from opensolaris.org
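One route, assuming qemu-img is available on the host (that availability is an assumption, not something the thread confirms), is to convert the vmdk into a raw image that xVM can use directly; the output path is only illustrative:
  qemu-img convert -f vmdk -O raw centos.vmdk /var/lib/xen/images/centos.img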
2009 Mar 03
8
zfs list extentions related to pNFS
Hi, I am soliciting input from the ZFS engineers and/or ZFS users on an extension to "zfs list". Thanks in advance for your feedback. Quick Background: The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU object set type which is used on the pNFS data server to store pNFS stripe DMU objects. A pNFS dataset gets created with the "zfs
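For context, "zfs list" already filters on dataset type with -t, which is presumably where a new pNFS object-set type would slot in; the pool name is a placeholder:
  zfs list -t filesystem,volume,snapshot -r tank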
2006 Nov 01
56
ZFS/iSCSI target integration
Rick McNeal and I have been working on building support for sharing ZVOLs as iSCSI targets directly into ZFS. Below is the proposal I'll be submitting to PSARC. Comments and suggestions are welcome. Adam ---8<--- iSCSI/ZFS Integration A. Overview The goal of this project is to couple ZFS with the iSCSI target in Solaris specifically to make it as easy to create and export ZVOLs
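If memory serves, this is the work that surfaced as the shareiscsi dataset property, which turns a zvol into an iSCSI target with a single setting; names and sizes below are illustrative:
  zfs create -V 20g tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol
  iscsitadm list target -v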
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive side. The system reboots immediately. Here is the log in /var/adm/messages Feb 8 16:07:09 amber unix: [ID 836849 kern.notice] Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40: Feb 8
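For reference, the shape of the command pair under discussion; -d on the receiving side recreates the sent dataset hierarchy under the target pool (all names here are placeholders):
  zfs snapshot -r tank/home@migrate
  zfs send -R tank/home@migrate | zfs recv -d newpool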
2009 Dec 03
5
L2ARC in clusters
Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), so that when the pool switches over to the other node zfs would pick up that node's local disk drives as L2ARC. To better clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
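Nothing does this automatically today, so it has to be scripted around the failover: drop the local cache devices before releasing the pool, then add the other node's local SSDs after the import; device names below are invented:
  # on the node giving up the pool
  zpool remove tank c1t4d0
  # on the node taking over, after the import completes
  zpool add tank cache c2t4d0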
2012 Nov 20
6
zvol wrapped in a vmdk by Virtual Box and double writes?
Hi folks, (Long time no post...) Only starting to get into this one, so apologies if I'm light on detail, but... I have a shiny SSD I'm using to help make some VirtualBox stuff I'm doing go fast. I have a 240GB Intel 520 series jobbie. Nice. I chopped it into a few slices - p0 (partition table), p1 128GB, p2 60GB. As part of my work, I have used it both as a RAW
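For the record, a raw-disk wrapper of that sort is normally created with VBoxManage; the vmdk and partition paths below are placeholders:
  VBoxManage internalcommands createrawvmdk \
      -filename /export/vbox/ssd-p2.vmdk -rawdisk /dev/rdsk/c1t0d0p2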
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and since recently it fails to boot - it hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB I have had around since installation, I ran some zdb traversals over the rpool and attempted some zpool imports. The imports fail by running the kernel out of RAM (as recently discussed on the list with
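One way to keep examining the pool without triggering the kernel-memory blowup is to traverse it from user space with zdb against the exported pool; the flag combination below is a typical choice, not a prescription:
  # -e exported pool, -b block stats, -c verify metadata checksums,
  # -s report stats, -v verbose, -L skip leak tracking
  zdb -e -bcsvL rpool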
2009 Mar 31
3
Bad SWAP performance from zvol
I've upgraded my system from ufs to zfs (root pool). By default, it creates a zvol for dump and swap. It's a 4GB Ultra-45 and every late night/morning I run a job which takes around 2GB of memory. With a zvol swap, the system becomes unusable and the Sun Ray client often goes into "26B". So I removed the zvol swap and now I have a standard swap partition. The
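The swap-over described above comes down to two commands plus cleanup; the slice name is hypothetical:
  swap -d /dev/zvol/dsk/rpool/swap
  zfs destroy rpool/swap
  swap -a /dev/dsk/c1t0d0s1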
2009 Jun 08
4
[caiman-discuss] Can not delete swap on AI sparc
Hi Richard, Richard Robinson wrote:
> I should add that I also used truss and saw the same ENOMEM error. I am on a 4Gb system with swap -l reporting
>
>   swapfile                  dev    swaplo   blocks     free
>   /dev/zvol/dsk/rpool/swap  181,1       8  4194296  4194296
>
> and I was trying to follow the directions for increasing swap here:
>
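When a running system refuses to delete its only swap device (the ENOMEM above), a common workaround is to grow swap by adding a second zvol instead of resizing the existing one; the 2g size is arbitrary:
  zfs create -V 2g rpool/swap2
  swap -a /dev/zvol/dsk/rpool/swap2
  swap -l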
2009 Jul 27
10
sam-fs on zfs-pool
Hi list, I've done some tests and run into a very strange situation. I created a zvol using "zfs create -V" and initialized a sam-filesystem on this zvol. After that I restored some test data using a dump from another system. So far so good. After some big troubles I found out that releasing files in the sam-filesystem doesn't free space on the underlying zvol.
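A quick way to see the effect being described is to compare SAM-side usage with what ZFS has allocated to the volume; once blocks have been written, the zvol keeps them allocated because nothing tells ZFS they are free again (the dataset name is made up):
  zfs list -o name,volsize,refer,used,refreservation tank/samvol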
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello, I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow, ~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs file system with the same test and got 121MB/s. Is there any way to fix this? I really would like to have comparable performance between the zfs filesystem and the zfs zvols. # first test is a
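One knob worth checking in such a comparison: zvols default to an 8K volblocksize while the file-side test will be using records up to 128K, so recreating the test volume with a larger block size makes the two cases more comparable; names and sizes below are for illustration only:
  zfs create -V 10g -o volblocksize=128k tank/testvol
  zfs get volblocksize tank/testvol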