similar to: Help with b97 HVM zvol-backed DomU disk performance

Displaying 20 results from an estimated 9000 matches similar to: "Help with b97 HVM zvol-backed DomU disk performance"

2008 Aug 01
5
how to configure ne2k emulation?
I've searched all night long on Google, discussions, threads, and the xen wiki... but I can't find how to do this: vif = ['type=ioemu,mac=XX:XX:XX:XX:XX:XX,model=ne2k_pci'] where should I put this? I have sxce94, dom0 works nicely, and domUs too, but I'd like to select ne2k for an hvm openbsd domU that I've created because I've read
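For context, in the xm-style config format that sxce's xVM uses, a vif line like the one quoted sits at the top level of the HVM guest's config file. A minimal sketch, with hypothetical names and values (only the vif line comes from the question):

```
# hypothetical HVM guest config (xm format)
name = "obsd-hvm"
builder = "hvm"
memory = 512
disk = ['phy:/dev/zvol/dsk/tank/obsd,hda,w']
# emulated ne2k NIC instead of the default model:
vif = ['type=ioemu,mac=00:16:3e:00:00:01,model=ne2k_pci']
```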
2007 Sep 13
26
hardware sizing for a zfs-based system?
Hi all, I'm putting together an OpenSolaris ZFS-based system and need help picking hardware. I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the OS & 4*(4+2) RAIDZ2 for SAN] http://rackmountpro.com/productpage.php?prodid=2418 Regarding the mobo, cpus, and memory - I searched Google and the ZFS site and all I came up with so far is that, for a
2005 Oct 01
3
yum can't find xen package?
I just installed CentOS 4.1 for the first time - I did a "minimal install"... Now I'm trying to install Xen - according to several online resources I should just be able to run `yum install xen`, but yum never finds the package - though tcpdump shows http traffic going to lists.centos.org: [root at centos ~]# yum install xen Setting up Install Process Setting up Repos update
2010 Jan 17
2
error: failed to serialize S-Expr
Last night I upgraded from b127 to b130 and now I can't mark autostart domains: # virsh autostart lunar-1 error: Failed to mark domain lunar-1 as autostarted error: failed to serialize S-Expr: (domain (on_crash restart) (uuid 91c21040-2098-bc0d-b401-3908f3a21667) (bootloader_args) (vcpus 1) (name lunar-1) (on_poweroff destroy) (on_reboot restart) (cpus ( Searching the archives
2013 Nov 22
1
FreeBSD 10-BETA3 - zfs clone of zvol snapshot is not created
Hi, am I doing something wrong, does ZFS not support this, or is there a bug where a zvol clone does not show up under /dev/zvol after being created from another zvol's snapshot? # zfs list -t all | grep local local 136G 76.8G 144K none local/home 117G 76.8G 117G /home local/vm 18.4G 76.8G 144K
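A minimal sequence reproducing the scenario described, with hypothetical dataset names; on FreeBSD the clone's device node is expected under /dev/zvol/&lt;pool&gt;/&lt;dataset&gt;:

```
# snapshot an existing zvol, then clone it (names hypothetical)
zfs snapshot local/vm/disk0@base
zfs clone local/vm/disk0@base local/vm/disk1
# the clone should then appear as a device node:
ls -l /dev/zvol/local/vm/disk1
```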
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello, I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow, ~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs file system with the same test and got 121MB/s. Is there any way to fix this? I really would like to have comparable performance between the zfs filesystem and zfs zvols. # first test is a
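The comparison described can be sketched as follows, with hypothetical pool names and sizes; /dev/zvol/rdsk/... is the raw zvol device path on OpenSolaris-era systems:

```
# write to a raw zvol at 1MB blocksize (names and sizes hypothetical)
zfs create -V 10g tank/testvol
dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=1000
# same write pattern against a file on the zfs filesystem, for comparison
dd if=/dev/zero of=/tank/testfile bs=1024k count=1000
```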
2007 Aug 23
1
EOF broken on zvol raw devices?
> I tried to copy an 8GB Xen domU disk image from a zvol device > to an image file on a ufs filesystem, and was surprised that > reading from the zvol character device doesn't detect "EOF". > > On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this: > > # zfs create -V 1440k tank/floppy-img > > # dd if=/dev/zvol/dsk/tank/floppy-img
2009 Nov 13
2
xend:default won't start due to /usr/bin/kstat not locating autosplit.ix
Short:     Which package do I need to install to get "auto/I18N/Langinfo/autosplit.ix"? Long: The problem I have was discussed almost a year ago (see this thread), but the resolution was not complete... I'm using Mark Johnson's slim.py script (package list below), against repo=http://pkg.opensolaris.org/dev, followed by `pkg install xvm-gui` and `svcadm enable
2007 Sep 19
8
ZFS Solaris 10u5 Proposed Changes
ZFS Fans, Here's a list of features that we are proposing for Solaris 10u5. Keep in mind that this is subject to change. Features: PSARC 2007/142 zfs rename -r PSARC 2007/171 ZFS Separate Intent Log PSARC 2007/197 ZFS hotplug PSARC 2007/199 zfs {create,clone,rename} -p PSARC 2007/283 FMA for ZFS Phase 2 PSARC/2006/465 ZFS Delegated Administration PSARC/2006/577 zpool property to
2009 Mar 03
0
HEADS UP: iSCSI, zvol, and vdisk format support
Looks like I forgot to send this to the alias... MRJ ---- A quick heads up for the iscsi, zvol, and vdisk format putback I did into the 3.3 development gate. ISCSI ===== With this putback, you can now install onto and boot a guest using iSCSI disk(s). The iscsi formats supported are: phy:iscsi:/alias/<iscsi-alias> phy:iscsi:/static/<server IP>/<lun>/<target
2009 Sep 10
3
zfs send of a cloned zvol
Hi, I have a question, let's say I have a zvol named vol1 which is a clone of a snapshot of another zvol (its origin property is tank/myvol@mysnap). If I send this zvol to a different zpool through a zfs send, does it send the origin too, that is, does an automatic promotion happen, or do I end up with a broken zvol? Best regards. Maurilio. -- This message posted from
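For reference, a plain zfs send of a clone's snapshot produces a full, self-contained stream (no automatic promotion); preserving the clone relationship on the receiving pool requires sending the origin snapshot first and then the clone as an incremental from it. A sketch using the names from the question plus a hypothetical @now snapshot and otherpool:

```
# confirm the clone's origin (tank/myvol@mysnap, per the question)
zfs get origin tank/vol1
# replicate the origin snapshot first, then the clone incrementally
zfs snapshot tank/vol1@now
zfs send tank/myvol@mysnap | zfs receive otherpool/myvol
zfs send -i tank/myvol@mysnap tank/vol1@now | zfs receive otherpool/vol1
```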
2010 Jul 16
1
Making a zvol unavailable to iSCSI trips up ZFS
I've been experimenting with a two system setup in snv_134 where each system exports a zvol via COMSTAR iSCSI. One system imports both its own zvol and the one from the other system and puts them together in a ZFS mirror. I manually faulted the zvol on one system by physically removing some drives. What I expect to happen is that ZFS will fault the zvol pool and the iSCSI stack will
2009 May 20
1
how to reliably determine what is locking up my zvol?
-bash-3.2# zpool export exchbk cannot remove device links for 'exchbk/exchbk-2': dataset is busy this is a zvol used for a comstar iscsi backend: -bash-3.2# stmfadm list-lu -v LU Name: 600144F0EAC0090000004A0A4F410001 Operational Status: Offline Provider Name : sbd Alias : /dev/zvol/rdsk/exchbk/exchbk-1 View Entry Count : 1 LU Name:
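In the COMSTAR case the usual culprit is an sbd logical unit still bound to the zvol; the LU has to be taken down before the export can proceed. A sketch using the LU name from the listing in the question (steps are an assumption about this particular setup):

```
# see which LUs are backed by zvols in the pool
stmfadm list-lu -v
# offline and delete the LU holding the zvol, then retry the export
stmfadm offline-lu 600144F0EAC0090000004A0A4F410001
stmfadm delete-lu 600144F0EAC0090000004A0A4F410001
zpool export exchbk
```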
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2012 Aug 31
0
oops with btrfs on zvol
Hi, I'm experimenting with btrfs on top of a zvol block device (using zfsonlinux), and got an oops on a simple mount test. While I'm sure that zfsonlinux is somehow also at fault here (since the same test with zram works fine), the oops only shows things btrfs-related without any usable mention of zfs/zvol. Could anyone help me interpret the kernel logs, which btrfs-zvol interaction
2009 Mar 31
3
Bad SWAP performance from zvol
I've upgraded my system from ufs to zfs (root pool). By default, it creates a zvol for dump and swap. It's a 4GB Ultra-45 and every late night/morning I run a job which takes around 2GB of memory. With a zvol swap, the system becomes unusable and the Sun Ray client often goes into "26B". So I removed the zvol swap and now I have a standard swap partition. The
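The swap change described can be sketched with the Solaris swap(1M) commands; device names here are hypothetical:

```
# list current swap devices, drop the zvol, add a disk slice instead
swap -l
swap -d /dev/zvol/dsk/rpool/swap
swap -a /dev/dsk/c0t0d0s1
```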
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no longer access /dev/zvol/dsk/poolname, which holds l2arc and slog devices for another pool. I don't think this is related, since the pools are offline pending access to the volumes. I tried running find /dev/zvol/dsk/poolname -type f and here is the stack; hopefully this gives someone a hint at what the issue is, I have
2004 Jul 14
3
ext3 performance with hardware RAID5
I'm setting up a new fileserver. It has two RAID controllers, a PERC 3/DI providing mirrored system disks and a PERC 3/DC providing a 1TB RAID5 volume consisting of eight 144GB U160 drives. This will serve NFS, Samba and sftp clients for about 200 users. The logical drive was created with the following settings: RAID = 5 stripe size = 32kb write policy = wrback read policy =
2007 Jan 26
10
UFS on zvol: volblocksize and maxcontig
Hi all! First off, if this has been discussed, please point me in that direction. I have searched high and low and really can't find much info on the subject. We have a large-ish (200gb) UFS file system on a Sun Enterprise 250 that is being shared with samba (lots of files, mostly random IO). OS is Solaris 10u3. Disk set is 7x36gb 10k scsi, 4 internal 3 external. For several
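When layering UFS on a zvol, the two knobs the subject line names are set in different places: volblocksize only at zvol creation time, maxcontig via tunefs afterwards. A sketch with hypothetical names and values:

```
# volblocksize cannot be changed after creation (values hypothetical)
zfs create -V 200g -o volblocksize=8k tank/ufsvol
newfs /dev/zvol/rdsk/tank/ufsvol
# maxcontig on the resulting UFS can be tuned later:
tunefs -a 16 /dev/zvol/rdsk/tank/ufsvol
```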