Displaying 20 results from an estimated 7000 matches similar to: "virsh troubling zfs!?"
2011 Jul 10
3
How create a FAT filesystem on a zvol?
The `lofiadm' man page describes how to export a file as a block
device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
Can't I do the same thing by first creating a zvol and then creating
a FAT filesystem on it? Nothing I've tried seems to work. Isn't the
zvol just another block device?
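For what it's worth, a likely explanation (sketched below with a hypothetical dataset name rpool/fatvol and a 100MB volume): `mkfs -F pcfs` normally reads drive geometry from the device, which a zvol does not provide, so the size usually has to be stated explicitly and the fdisk-partition check skipped:

```shell
# create a 100MB zvol (dataset name is illustrative)
zfs create -V 100m rpool/fatvol
# pcfs wants geometry the zvol cannot supply; skip the fdisk
# partition and give the size explicitly in 512-byte sectors
mkfs -F pcfs -o nofdisk,size=204800 /dev/zvol/rdsk/rpool/fatvol
# mkfs uses the raw device (rdsk); mount uses the block device (dsk)
mount -F pcfs /dev/zvol/dsk/rpool/fatvol /mnt
```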
--
-Gary Mills- -Unix Group-
2009 Jun 08
4
[caiman-discuss] Can not delete swap on AI sparc
Hi Richard,
Richard Robinson wrote:
> I should add that I also used truss and saw the same ENOMEM error. I am on a 4Gb system with swap -l reporting
>
> swapfile                  dev    swaplo   blocks     free
> /dev/zvol/dsk/rpool/swap  181,1       8  4194296  4194296
>
> and I was trying to follow the directions for increasing swap here:
>
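The usual sequence for resizing a ZFS swap zvol, sketched here with an illustrative 2GB target, is to remove the device from swap, resize the volume, and add it back; `swap -d` itself is the step that fails with ENOMEM when the in-use swap pages cannot be pulled back into RAM:

```shell
# remove the zvol from swap (this is the step that can return ENOMEM)
swap -d /dev/zvol/dsk/rpool/swap
# resize the backing volume (size is illustrative)
zfs set volsize=2g rpool/swap
# add it back and verify
swap -a /dev/zvol/dsk/rpool/swap
swap -l
```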
2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello, all
I have constrained disk space (only 8GB) while running the OS inside a VM. Now I
want to add more. It is easy to add to the VM, but how can I grow the fs in the OS?
I cannot use autoexpand because it isn't implemented in my build:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were build 171 it would be great, right?
Doing following:
o added new virtual HDD (it becomes
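On builds without the autoexpand property, one workaround is to mirror the pool onto the larger virtual disk and then drop the small one (device names below are hypothetical):

```shell
# attach the new, larger virtual disk as a mirror of the existing one
zpool attach rpool c7d0s0 c7d1s0
# wait for the resilver to complete
zpool status rpool
# detach the small disk; on export/import the pool sees the larger size
zpool detach rpool c7d0s0
```

For a root pool, the new disk also needs boot blocks (installgrub on x86) before the old disk is detached.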
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team,
**Please respond to me and my coworker listed in the Cc, since neither
one of us is on this alias**
QUICK PROBLEM DESCRIPTION:
Cu created a dataset which contains all the zvols for a particular
zone. The zone is then given access to all the zvols in the dataset
using a match statement in the zoneconfig (see long problem description
for details). After the initial boot of the zone
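The match statement in question presumably looks something like the sketch below (pool and zone names are hypothetical); the zone's device nodes created from such a match can go stale if the zvol minor numbers are renumbered between boots, which is the problem described:

```shell
zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/zvol/dsk/tank/myzone/*
zonecfg:myzone:device> end
zonecfg:myzone> commit
```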
2008 Jul 26
1
Expanding a root pool
I'm attempting to expand a root pool for a VMware VM that is on an 8GB virtual disk. I mirrored it to a 20GB disk and detached the 8GB disk. I did "installgrub" to install grub onto the second virtual disk, but I get a kernel panic when booting. Is there an extra step I need to perform to get this working?
Basically I did this:
1. Created a new 20GB virtual disk.
2. Booted
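The standard sequence, with illustrative device names, is below; one common cause of a boot panic here is attaching the whole disk (c1t1d0) rather than a slice on an SMI-labeled disk, since root pools must live on a slice:

```shell
# mirror the root pool onto the larger disk's slice 0
zpool attach -f rpool c1t0d0s0 c1t1d0s0
# install GRUB on the new disk so it can boot standalone
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# after the resilver finishes, drop the 8GB disk
zpool detach rpool c1t0d0s0
```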
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello,
I had snv_111b running for a while on a HP DL160G5. With two 16GB USB sticks comprising the mirrored rpool for boot. And four 1TB drives comprising another pool, pool1, for data.
So that's been working just fine for a few months. Yesterday I get it into my mind to upgrade the OS to latest, then was snv_127. That worked, and all was well. Also did an upgrade to the
2009 Nov 09
1
CentOS 5.4 x86_64 domU start fails
Hi and hello,
The error:
libvirtError: virDomainCreate() failed POST operation failed: (xend.err "(2, 'Invalid Kernel', 'xc_dom_find_loader : no loader found \\n')")
My Machine:
SunOS katecholamin 5.11 snv_111b i86pc i386 i86xpv Solaris
I created a disk:
pfexec zfs create -V 10G rpool/vms/centos/centos-dsk
Mounted my .iso and shared it via nfs:
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi,
as Richard Elling wrote earlier:
"For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system) this can make a
pleasant improvement over a HDD-only implementation."
For the upcoming
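A sketch of that layout, assuming the SSD shows up as c2t0d0 and has been SMI-labeled with format(1M) into a ~20GB s0 and a remainder s1; note that at the time, cache devices were not supported on the root pool itself, so the L2ARC slice has to go to a data pool:

```shell
# s0 holds the root pool (normally created by the installer)
zpool create rpool c2t0d0s0
# the leftover slice becomes L2ARC for a data pool
zpool add tank cache c2t0d0s1
```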
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and since
recently it fails to boot - hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
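Two things worth trying in that situation, both hedged since flag support varies by build: an inspection pass with zdb that avoids importing the pool at all, and zpool import's recovery mode, which discards the last few transactions to reach a consistent txg:

```shell
# walk the exported pool's block tree without importing it
zdb -e -bcsv rpool
# recovery-mode import: roll back to an earlier consistent txg
zpool import -F -f rpool
```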
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN) and when pool switches
over to the other node zfs would pick up the node's local disk drives as
L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
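Since cache devices can be added and removed from a pool at runtime, one way to approximate this is to splice the local SSDs in and out from the cluster's failover hooks (pool and device names hypothetical):

```shell
# post-takeover hook: attach this node's local SSDs as L2ARC
zpool add tank cache c1t2d0 c1t3d0
# pre-release hook: remove them so the pool config stays portable
zpool remove tank c1t2d0 c1t3d0
```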
2009 Jun 29
7
ZFS - SWAP and lucreate..
Good morning everybody
I was migrating my ufs root filesystem to a zfs one, but was a little upset to find that it became bigger (which was to be expected given the swap and dump sizes).
Now I am asking myself whether it is possible to set the swap and dump size using the lucreate command (I want to try it again but with less space). Unfortunately I did not find any advice in the man pages.
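As far as I can tell lucreate offers no switch for this; since swap and dump on a ZFS root are shared zvols in the pool, one approach is simply to resize them (sizes below are illustrative; dumpadm may reject a dump device below its minimum size):

```shell
# shrink the dump device
zfs set volsize=1g rpool/dump
# swap has to be deactivated before it can be shrunk
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=1g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap
```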
2010 Jan 07
1
Trying to get Xen going with snv_130...
I'm trying to get a linux domU going with opensolaris '130.
The CPU is a plain P4 3.2GHz (no hardware Virt) but all I'm trying is --paravirt
Here's the command line I'm trying.....
virt-install --paravirt --name=dom1 --ram=1024 --vnc \
--os-type=linux --os-variant=fedora8 \
--network bridge \
--file /dev/zvol/dsk/rpool/dom1 \
--location
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread... so I'm reposting it)
I am trying to move my data off of a 40gb 3.5" drive to a 40gb 2.5" drive. This is in a Netra running Solaris 10.
Originally what I did was:
zpool attach -f rpool c0t0d0 c0t2d0.
Then I did an installboot on c0t2d0s0.
Didn't work. I was not able to boot from my second drive (c0t2d0).
I cannot remember
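On SPARC (a Netra), the boot block is written with installboot rather than installgrub; a sketch, assuming the target slice is c0t2d0s0:

```shell
# write the ZFS boot block onto the new disk's root slice
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t2d0s0
```

The OBP boot-device alias may also need updating to point at the new disk.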
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello,
I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow: ~35-44MB/s at 1MB blocksize writes. I then ran the same test against the underlying zfs file system and got 121MB/s. Is there any way to fix this? I really would like comparable performance between the zfs filesystem and zfs zvols.
# first test is a
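The shape of such a test is roughly the following (paths and sizes illustrative); one frequently cited factor is the default 8K volblocksize on zvols versus the 128K recordsize on file systems, so it may be worth creating the zvol with a larger volblocksize for a fair comparison:

```shell
# sequential write to a file on the zfs filesystem
dd if=/dev/zero of=/tank/fs/testfile bs=1024k count=1000
# same write against a zvol, via the raw device
zfs create -V 2g -o volblocksize=128k tank/testvol
dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=1000
```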
2008 Jul 19
2
problems with virt-install on snv_93
Hi,
I try to start Solaris10 HVM DomU on SNV_93 dom0 running amd athlon x2
and 8gb ram.
When I try to create a domU i get:
bash-3.2# virt-install -n sol10 --hvm -r 1024 --vnc -f
/dev/zvol/dsk/rpool/sol10 -c /rpool/pub/sol-10-u5-ga-x86-dvd.iso
Starting install...
virDomainCreateLinux() failed POST operation failed: (xend.err 'Device
0 (vif) could not be connected. Backend device not
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two SATA disks in the mirror. Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc
Unfortunately the first disk with grub loader has failed with unrecoverable
block write/read errors.
Now I have the problem to import rpool after the first disk has failed.
So I decided to run "zpool import -f rpool" with only the second disk, but it
hangs and the system is
2011 Feb 23
1
Using Solaris iSCSI target in VirtualBox iSCSI Initiator
Hello,
I'm using ZFS to export some iscsi targets for the virtual box iscsi
initiator.
It works ok if I try to install the guest OS manually.
However, I'd like to be able to import my already prepared guest os vdi
images into the iscsi devices but I can't figure out how to do it.
Each time I tried, I cannot boot.
It only works if I save the manually installed guest os and re-instate the
same
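One approach that should work, hedged since the exact VBoxManage syntax has changed across releases: convert the prepared .vdi to a raw image and copy it onto the zvol backing the iscsi target (names illustrative):

```shell
# flatten the prepared guest image to raw sectors
VBoxManage clonehd guest.vdi guest.raw --format RAW
# write it onto the zvol that backs the iscsi LUN
dd if=guest.raw of=/dev/zvol/rdsk/tank/guest bs=1024k
```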
2008 Jun 18
4
Problem creating a Windows XP HVM with snv_90
Hi all,
I've just upgraded my OpenSolaris 2008.05 workstation to snv_90 after unsuccessfully trying to get virt-install.sh to run on snv_86, and now have (I think) a more serious problem. In the xend-debug.log file I keep getting the following error repeated many times:
Failed allocation for dom 8: 1 extents of order 9
ERROR Internal error: Cannot allocate more 2M pages for HVM guest.
2008 Nov 08
7
Paravirtualized Solaris Update 6 (10/08)?
Gurus;
I've been running Solaris 10 on a HVM domain on my machine (running SXCE
snv_93 x86) for some time now.
Now that Solaris 10 Update 6 (10/08) has been released, I tried creating
a Paravirtualized Guest domain but got the same error message I got
previously...
# virt-install -n sol10 -p -r 1560 --nographics -f
/dev/zvol/dsk/rpool/sol10 -l /stage/sol-10-u6->
Starting
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.>
I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part.
The system reboots immediately.
Here is the log in /var/adm/messages
Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40:
Feb 8