Displaying 20 results from an estimated 30000 matches similar to: "Hotplug for virtualization use case (Draft)"
2007 Apr 18
3
Use case on "Hotplug for Virtualization"
The latest draft of the Use Case on Hotplug for Virtualization is posted
at:
http://www.developer.osdl.org/maryedie/HOTPLUG/docs/Hotplug_virtual_use_case.txt
It can also be accessed from the hotplug use cases page:
http://developer.osdl.org/dev/usecases/hotplug.shtml
Please remember to share your comments/errata/suggestions.
Thanks - Martine
2016 Aug 12
0
[PATCH 6/6] net: virtio-net: Convert to hotplug state machine
Install the callbacks via the state machine.
The driver supports multiple instances, and therefore the new
cpuhp_state_add_instance_nocalls() infrastructure is used. The driver
currently uses get_online_cpus() to avoid missing a CPU hotplug event
while invoking virtnet_set_affinity(). This could be avoided by using
the cpuhp_state_add_instance() variant, which holds the hotplug lock and
invokes the callback
2007 Aug 04
2
HotPlug, eSATA, and /media
Ok, got a quickie.
I have an eSATA drive, a 750GB Seagate in an eSATA external enclosure, and a
Silicon Image sil3132 ExpressCard controller for my laptop. The disk and
controller work great in CentOS 5 (or F7, for that matter), if I specifically
mount it.
This is not how I want to have to use this drive, however. I want to hotplug
it; that is, plug the controller into the laptop, and then
2009 Apr 03
0
the infamous Error: Device 768 (vbd) could not be connected. Hotplug scripts not working
I am trying various config files to get some kind of guest to start, and
they all seem to get stuck with the error:
Error: Device 768 (vbd) could not be connected. Hotplug scripts not working.
xen is 3.3.1 and Dom0 is 2.6.29.
Here's an example of one such config that reproduces this error.
kernel = "/usr/lib/xen/boot/hvmloader"
builder = 'hvm'
device_model =
2012 Jan 03
1
Hotplug/hotadd functionality of libvirt?
Hello,
First of all, happy new year!
I am interested in the hot plugging facilities of libvirt, in particular
in what qemu calls 'hot add' of network interface cards. (And also the
reverse: hot unplugging/removing of NICs.)
I think I am overlooking something, but so far the best I have been able
to find is:
http://libvirt.org/sources/virshcmdref/html/
and it is not entirely clear
2020 Oct 07
0
Re: PCI Passthrough and Surprise Hotplug
On Mon, 5 Oct 2020 11:05:05 -0400
Marc Smith <msmith626@gmail.com> wrote:
> Hi,
>
> I'm using QEMU/KVM on RHEL (CentOS) 7.8.2003:
> # cat /etc/redhat-release
> CentOS Linux release 7.8.2003
>
> I'm passing an NVMe drive into a Linux KVM virtual machine (<type
> arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>) which has the
2012 Jun 11
0
Race condition during hotplug when dropping block queue lock
Block drivers like nbd and rbd unlock struct request_queue->queue_lock in their
request_fn. I'd like to do the same in virtio_blk. After happily posting the
patch, Michael Tsirkin pointed out an issue that I can't explain. This may
affect existing block drivers that unlock the queue_lock too.
What happens when the block device is removed (hot unplug or kernel module
unloaded) while
2010 Dec 15
9
[Bug 32406] New: nouveau fails to send hotplug event to ALSA hda hdmi audio
https://bugs.freedesktop.org/show_bug.cgi?id=32406
Summary: nouveau fails to send hotplug event to ALSA hda hdmi audio
Product: xorg
Version: unspecified
Platform: Other
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
AssignedTo: nouveau
2018 Jun 15
0
[virtio-dev] Re: [Qemu-devel] [PATCH] qemu: Introduce VIRTIO_NET_F_STANDBY feature bit to virtio_net
On Fri, Jun 15, 2018 at 4:48 AM, Cornelia Huck <cohuck at redhat.com> wrote:
> On Thu, 14 Jun 2018 18:57:11 -0700
> Siwei Liu <loseweigh at gmail.com> wrote:
>
>> Thank you for sharing your thoughts, Cornelia. With questions below, I
>> think you raised really good points, some of which I don't have answer
>> yet and would also like to explore here.
2020 Feb 14
0
Re: can hotplug vcpus to running Windows 10 guest, but not unplug
On Fri, Feb 14, 2020 at 4:54 PM Lentes, Bernd <
bernd.lentes@helmholtz-muenchen.de> wrote:
>
> qemu-kvm-2.11.2-5.18.1.x86_64
>
> [...]
> I found a table on
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/cpu_hot_plug
> saying that hotplugging is possible but no hotunplugging.
> But I don't know how
2014 Aug 05
0
When I boot two virtio-rng devices, guest will hang
3.16 (guest hangs with two rng devices)
3.16 + quick fix (can startup with two rng devices) (hotplug issue 1 + hotplug issue 2 exist)
latest torvalds/linux.git + Amit's 4 patches (can start up with two rng devices) (only hotplug issue 2 exists)
However, the 4 patches also fixed the hang issue; the hotplug issue was only partly fixed.
The hotplug issue is affected by the backend, or maybe it's not a
2018 Jun 19
0
[virtio-dev] Re: [Qemu-devel] [PATCH] qemu: Introduce VIRTIO_NET_F_STANDBY feature bit to virtio_net
On Tue, Jun 19, 2018 at 3:54 AM, Cornelia Huck <cohuck at redhat.com> wrote:
> On Fri, 15 Jun 2018 10:06:07 -0700
> Siwei Liu <loseweigh at gmail.com> wrote:
>
>> On Fri, Jun 15, 2018 at 4:48 AM, Cornelia Huck <cohuck at redhat.com> wrote:
>> > On Thu, 14 Jun 2018 18:57:11 -0700
>> > Siwei Liu <loseweigh at gmail.com> wrote:
>> >
2015 Oct 05
1
[PATCH] Fix shebang in perl scripts
Instead of hardcoding the location of perl (assuming it is installed in
/usr), use /usr/bin/env to run it, and thus picking it from $PATH.
This makes it possible to run these scripts also on installations with
perl in a different prefix than /usr.
Also, given that we want to enable warnings on scripts, turn the -w
previously in the shebang into an explicit "use warnings;" in scripts which
2014 Oct 15
1
Fwd: Hotadd memory and hotplug cpu
Hello,
Does KVM support hotadd memory and hot-plug cpu?
I checked using the virsh command, but I can only increase the memory up to
the maximum memory that is initially set.
But the setmaxmem command is failing, which means this requires a reboot of the
guest:
# virsh setmaxmem 4 1048576
error: Unable to change MaxMemorySize
error: Requested operation is not valid: cannot resize the maximum memory
on
2009 Feb 09
3
hotplug vcpu problem to Centos 5.2 DomU
My dom0 is using CentOS 5.2 x64. I have just upgraded Xen from 3.3.0 to
3.3.1. After upgrading, I find that I cannot hotplug additional vcpus anymore.
I have a domU "linux1" which is a paravirtualized vm with CentOS 5.2. I try
"xm vcpu-set linux1 4"; it does not report any error message, but the added
vcpus are not displayed. I also tried to add vcpus using "virsh setvcpus
2011 Jun 02
0
[PATCH] pci: Use pr_<level> and pr_fmt
Use the current logging message styles.
Convert the dbg and debug macros to always have a terminating \n.
Remove err, warn, and info macros, use pr_<level>.
Add pr_fmt as appropriate.
Signed-off-by: Joe Perches <joe at perches.com>
---
drivers/pci/dmar.c | 116 ++++-----
drivers/pci/hotplug/acpi_pcihp.c | 36 ++--
drivers/pci/hotplug/acpiphp.h