search for: vhci

Displaying 20 results from an estimated 24 matches for "vhci".

2006 Oct 31
0
6349232 vhci cache may not contain iscsi device information when cache is rebuilt
Author: ramat Repository: /hg/zfs-crypto/gate Revision: 4b26d77cdea7130b1da9746af6ad53939d24d297 Log message: 6349232 vhci cache may not contain iscsi device information when cache is rebuilt Files: update: usr/src/uts/common/os/sunmdi.c update: usr/src/uts/common/sys/mdi_impldefs.h
2010 Jun 21
0
Seriously degraded SAS multipathing performance
...e, asvc_t: 99 ms With two paths connected, round-robin disabled, pin half the drives to one path (path A), the other half of the drives to the other path (path B): 22 drives: 2.2 GB/s sustained write (1.1 GB/s per path), asvc_t: 12 ms Multipath support info: mpathadm show mpath-support libmpscsi_vhci.so mpath-support: libmpscsi_vhci.so Vendor: Sun Microsystems Driver Name: scsi_vhci Default Load Balance: round-robin Supported Load Balance Types: round-robin logical-block Allows To Activate Target Port Group Acces...
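The mpathadm output quoted above reports round-robin as the default load-balance policy. On Solaris-derived systems that policy is normally set in /kernel/drv/scsi_vhci.conf; a minimal fragment illustrating the setting discussed in the thread (a sketch, not the poster's actual file) might look like:

```
# /kernel/drv/scsi_vhci.conf (illustrative fragment)
# Global load-balance policy for MPxIO-managed LUNs.
# "round-robin" alternates I/O across all active paths,
# "logical-block" pins regions of the LUN to a path,
# and "none" disables balancing entirely.
load-balance="round-robin";
```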
2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a Netapp filer. Both the T2000 and the Netapp have two ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format': 1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB> /scsi_vhci/ssd@g60a98000433469764e4a413571444b63 2. c4t60A98000433469764E4A41357149432Fd0 <NETAPP-LUN-0.2-50.00GB> /scsi_vhci/ssd@g60a9...
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS? Keith McAndrew Senior Systems Engineer Northern California SUN Microsystems - Data Management Group <mailto:Keith.McAndrew@SUN.com> Keith.McAndrew@SUN.com 916 715 8352 Cell CONFIDENTIALITY NOTICE The information contained in this transmission may contain privileged and confidential information of SUN
2014 Jun 15
2
Re: ERROR: Domain not found: no domain with matching name 'ubuntu'
...iSCSI session parameters Bluetooth(R) options: -bt hci,null dumb bluetooth HCI - doesn't respond to commands -bt hci,host[:id] use host's HCI with the given name -bt hci[,vlan=n] emulate a standard HCI in virtual scatternet 'n' -bt vhci[,vlan=n] add host computer to virtual scatternet 'n' using VHCI -bt device:dev[,vlan=n] emulate a bluetooth device 'dev' in scatternet 'n' Linux/Multiboot boot specific: -kernel bzImage use 'bzImage' as kernel image -append cmdline us...
2013 Jan 07
5
mpt_sas multipath problem?
Greetings, We're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue, although the internal drives do appear to the OS with multipath device names (despite the fact
2014 Jun 12
3
ERROR: Domain not found: no domain with matching name 'ubuntu'
Hi guys, I am new to QEMU-KVM, libvmi and libvirt stuff. Libvmi uses libvirt. I am trying to run the process-list example of libvmi and getting the error below. It seems that this error may be due to libvirt, as it is not able to find the domain. I seek your kind help on the error below: spanhal1@seclab2:~/KVMModule/libvmi-0.10.1$ sudo ./examples/process-list ubuntu libvir: QEMU error : Domain not found:
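Since libvmi resolves the guest through libvirt, the first thing to check with a "Domain not found" error is whether libvirt actually knows a domain by that name. A hedged sketch (the domain name "ubuntu" is simply the one from the report; virsh must be installed):

```shell
# List every libvirt domain, running or shut off; the "Name" column is the
# exact string the libvmi examples expect as their argument.
DOMAIN="ubuntu"   # name used in the report; substitute your own
if command -v virsh >/dev/null 2>&1; then
    virsh list --all
else
    echo "virsh not found; install the libvirt client tools first"
fi
```

If the name printed by virsh differs (even in case) from the one passed to process-list, libvirt will report exactly this error.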
2013 May 09
1
Bug#707434: xen: FTBFS: vl.c:1575: undefined reference to `timer_create'
...scsi-generic.o > CC usb.o > CC usb-hub.o > CC usb-linux.o > CC usb-hid.o > CC usb-msd.o > CC usb-wacom.o > CC usb-serial.o > CC usb-net.o > CC sd.o > CC ssi-sd.o > CC bt.o > CC bt-host.o > CC bt-vhci.o > CC bt-l2cap.o > CC bt-sdp.o > CC bt-hci.o > CC bt-hid.o > CC usb-bt.o > CC buffered_file.o > CC migration.o > CC migration-tcp.o > CC net.o > CC qemu-sockets.o > CC qemu-char.o > /?PKGBUILDDIR?/debian/bui...
2020 Sep 03
3
Error while loading shared libraries: libsbz.so
Hi, I have a KVM host running ubuntu 18.04 with libguestfs-tools version 1.36.13-1ubuntu3.3 installed from the Ubuntu's repo and when I try to use virt-cat for example on a VM it fails with: libguestfs: error: appliance closed the connection unexpectedly. > libguestfs: error: guestfs_launch failed. After doing "export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1" and running the
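As the poster notes, libguestfs failures like this are usually diagnosed by re-running the tool with debugging and tracing enabled. A hedged sketch of that step (the virt-cat arguments are placeholders, not taken from the report):

```shell
# Turn on full libguestfs debug and trace output for this shell session.
export LIBGUESTFS_DEBUG=1
export LIBGUESTFS_TRACE=1

# Re-run the failing command to capture the appliance log; guarded so the
# sketch is a no-op on machines without libguestfs-tools installed.
if command -v virt-cat >/dev/null 2>&1; then
    virt-cat -d myguest /etc/hostname   # "myguest" is a placeholder VM name
fi
```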
2017 Dec 11
3
Libguestfs Hangs on CentOS 7.4
...------- 1 root root 7, 1 Dec 11 14:02 vcs1 crw------- 1 root root 7, 128 Dec 11 14:02 vcsa crw------- 1 root root 7, 129 Dec 11 14:02 vcsa1 drwxr-xr-x 2 root root 60 Dec 11 14:02 vfio crw------- 1 root root 10, 63 Dec 11 14:02 vga_arbiter crw------- 1 root root 10, 137 Dec 11 14:02 vhci crw------- 1 root root 10, 238 Dec 11 14:02 vhost-net drwxr-xr-x 2 root root 60 Dec 11 14:02 virtio-ports crw------- 1 root root 245, 1 Dec 11 14:02 vport1p1 crw-rw-rw- 1 root root 1, 5 Dec 11 14:02 zero /dev/block: total 0 lrwxrwxrwx 1 root root 6 Dec 11 14:02 8:0 -> ../sda lrwxrw...
2020 Sep 03
0
Re: Error while loading shared libraries: libsbz.so
...cs > crw------- 1 0 0 7, 1 Sep 3 10:30 vcs1 > crw------- 1 0 0 7, 128 Sep 3 10:30 vcsa > crw------- 1 0 0 7, 129 Sep 3 10:30 vcsa1 > drwxr-xr-x 2 0 0 60 Sep 3 10:30 vfio > crw------- 1 0 0 10, 63 Sep 3 10:30 vga_arbiter > crw------- 1 0 0 10, 137 Sep 3 10:30 vhci > crw------- 1 0 0 10, 238 Sep 3 10:30 vhost-net > crw------- 1 0 0 10, 241 Sep 3 10:30 vhost-vsock > drwxr-xr-x 2 0 0 60 Sep 3 10:30 virtio-ports > crw------- 1 0 0 245, 1 Sep 3 10:30 vport2p1 > crw-rw-rw- 1 0 0 1, 5 Sep 3 10:30 zero > > /dev/block: > to...
2020 Sep 03
1
Re: Error while loading shared libraries: libsbz.so
...7, 1 Sep 3 10:30 vcs1 > > crw------- 1 0 0 7, 128 Sep 3 10:30 vcsa > > crw------- 1 0 0 7, 129 Sep 3 10:30 vcsa1 > > drwxr-xr-x 2 0 0 60 Sep 3 10:30 vfio > > crw------- 1 0 0 10, 63 Sep 3 10:30 vga_arbiter > > crw------- 1 0 0 10, 137 Sep 3 10:30 vhci > > crw------- 1 0 0 10, 238 Sep 3 10:30 vhost-net > > crw------- 1 0 0 10, 241 Sep 3 10:30 vhost-vsock > > drwxr-xr-x 2 0 0 60 Sep 3 10:30 virtio-ports > > crw------- 1 0 0 245, 1 Sep 3 10:30 vport2p1 > > crw-rw-rw- 1 0 0 1, 5 Sep 3 10:30 zero >...
2011 Feb 26
1
make world error
...05.o CC lm832x.o CC scsi-disk.o CC cdrom.o CC scsi-generic.o CC usb.o CC usb-hub.o CC usb-linux.o CC usb-hid.o CC usb-msd.o CC usb-wacom.o CC usb-serial.o CC usb-net.o CC sd.o CC ssi-sd.o CC bt.o CC bt-host.o CC bt-vhci.o CC bt-l2cap.o CC bt-sdp.o CC bt-hci.o CC bt-hid.o CC usb-bt.o CC buffered_file.o CC migration.o CC migration-tcp.o CC net.o CC qemu-sockets.o CC qemu-char.o qemu-char.c:1123:7: warning: "CONFIG_STUBDOM" is not defined CC net-ch...
2014 Dec 11
1
Inspect_os() error
...- 1 0 0 7, 129 Dec 11 21:57 vcsa1 brw------- 1 0 0 253, 0 Dec 11 21:57 vda brw------- 1 0 0 253, 1 Dec 11 21:57 vda1 brw------- 1 0 0 253, 2 Dec 11 21:57 vda2 brw------- 1 0 0 253, 16 Dec 11 21:57 vdb crw------- 1 0 0 10, 63 Dec 11 21:57 vga_arbiter crw------T 1 0 0 10, 137 Dec 11 21:57 vhci crw------T 1 0 0 10, 238 Dec 11 21:57 vhost-net drwxr-xr-x 2 0 0 60 Dec 11 21:57 virtio-ports crw------- 1 0 0 251, 1 Dec 11 21:57 vport2p1 crw-rw-rw- 1 0 0 1, 5 Dec 11 21:57 zero /dev/block: total 0 lrwxrwxrwx 1 0 0 7 Dec 11 21:57 1:0 -> ../ram0 lrwxrwxrwx 1 0 0 7 Dec 11 21:57 1:1...
2016 Feb 27
2
"guestmount --rw" fails but "guestmount --ro" succeeds on Ubuntu 14.04
...-rw- 1 0 0 1, 9 Feb 27 02:31 urandom crw------- 1 0 0 7, 0 Feb 27 02:31 vcs crw------- 1 0 0 7, 1 Feb 27 02:31 vcs1 crw------- 1 0 0 7, 128 Feb 27 02:31 vcsa crw------- 1 0 0 7, 129 Feb 27 02:31 vcsa1 crw------ 1 0 0 10, 63 Feb 27 02:31 vga_arbiter crw------- 1 0 0 10, 137 Feb 27 02:31 vhci crw------- 1 0 0 10, 238 Feb 27 02:31 vhost-net drwxr-xr-x 2 0 0 60 Feb 27 02:31 virtio-ports crw------- 1 0 0 251, 1 Feb 27 02:31 vport1p1 crw-rw-rw- 1 0 0 1, 5 Feb 27 02:31 zero /dev/block: total 0 lrwxrwxrwx 1 0 0 7 Feb 27 02:31 1:0 -> ../ram0 lrwxrwxrwx 1 0 0 7 Feb 27 02:31 1:1 -&g...
2015 May 28
3
Re: Concurrent scanning of same disk
...1 0 0 7, 0 May 28 06:36 vcs crw------- 1 0 0 7, 1 May 28 06:36 vcs1 crw------- 1 0 0 7, 128 May 28 06:36 vcsa crw------- 1 0 0 7, 129 May 28 06:36 vcsa1 drwxr-xr-x 2 0 0 60 May 28 06:36 vfio crw------- 1 0 0 10, 63 May 28 06:36 vga_arbiter crw------- 1 0 0 10, 137 May 28 06:36 vhci crw------- 1 0 0 10, 238 May 28 06:36 vhost-net drwxr-xr-x 2 0 0 60 May 28 06:36 virtio-ports crw------- 1 0 0 249, 1 May 28 06:36 vport1p1 crw-rw-rw- 1 0 0 1, 5 May 28 06:36 zero /dev/block: total 0 lrwxrwxrwx 1 0 0 6 May 28 06:36 8:0 -> ../sda lrwxrwxrwx 1 0 0 7 May 28 06:36 8:1...
2013 Oct 06
40
[xen] double fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Greetings, I got the below dmesg and the first bad commit is commit cf39c8e5352b4fb9efedfe7e9acb566a85ed847c Merge: 3398d25 23b7eaf Author: Linus Torvalds <torvalds@linux-foundation.org> Date: Wed Sep 4 17:45:39 2013 -0700 Merge tag 'stable/for-linus-3.12-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip Pull Xen updates from Konrad
2015 May 27
3
Concurrent scanning of same disk
Greetings, I am suffering of several weird errors which show randomly and make me suspect some concurrency issue. Libguestfs version is 1.28.1, linux kernel 3.16, libvirt 1.2.9 and qemu 2.1. What I'm trying to do is comparing the disk state at two different point of a guest execution. Disk snapshots are taken through libvirt in different moments (I am aware of caching issue), from such
2017 Dec 02
0
Re: [nbdkit PATCH] nbd: Fix memory leak
...------- 1 root root 7, 1 Dec 2 18:18 vcs1 crw------- 1 root root 7, 128 Dec 2 18:18 vcsa crw------- 1 root root 7, 129 Dec 2 18:18 vcsa1 drwxr-xr-x 2 root root 60 Dec 2 18:18 vfio crw------- 1 root root 10, 63 Dec 2 18:18 vga_arbiter crw------- 1 root root 10, 137 Dec 2 18:18 vhci crw------- 1 root root 10, 238 Dec 2 18:18 vhost-net crw------- 1 root root 10, 241 Dec 2 18:18 vhost-vsock drwxr-xr-x 2 root root 60 Dec 2 18:18 virtio-ports crw------- 1 root root 245, 1 Dec 2 18:18 vport2p1 crw-rw-rw- 1 root root 1, 5 Dec 2 18:18 zero /dev/block: total 0 lrwx...
2007 Dec 09
8
zpool kernel panics.
Hi Folks, I've got a 3.9 Tb zpool, and it is causing kernel panics on my Solaris 10 280r (SPARC) server. The message I get on panic is this: panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment (offset=423713792 size=1024) This seems to come about when the zpool is being used or being scrubbed - about twice a day at the moment. After the reboot, the scrub seems to have