Jes Sorensen
2008-Mar-31 12:01 UTC
[04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Zhang, Xiantao wrote:
>>From 62895ff991d48398a77afdbf7f2bef127e802230 Mon Sep 17 00:00:00 2001
> From: Xiantao Zhang <xiantao.zhang at intel.com>
> Date: Fri, 28 Mar 2008 09:49:57 +0800
> Subject: [PATCH] KVM: IA64: Add kvm arch-specific core code for
> kvm/ia64.
>
> kvm_ia64.c is created to handle kvm ia64-specific core logic.
> Signed-off-by: Xiantao Zhang <xiantao.zhang at intel.com>

More comments, a couple of bugs in this one.

> +#include <linux/module.h>
> +#include <linux/vmalloc.h>

Don't think you need vmalloc.h here.

> +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> +{
[snip]
> +	copy_from_user(&vcpu->arch.guest, regs->saved_guest,
> +			sizeof(union context));
> +	copy_from_user(vcpu + 1, regs->saved_stack + sizeof(struct kvm_vcpu),
> +			IA64_STK_OFFSET - sizeof(struct kvm_vcpu));

You need to check the return values from copy_from_user() here and deal
with possible failure.

> +	vcpu->arch.apic = kzalloc(sizeof(struct kvm_lapic), GFP_KERNEL);
> +	vcpu->arch.apic->vcpu = vcpu;

Whoops! Missing NULL pointer check here after the kzalloc.

> +	copy_to_user(regs->saved_guest, &vcpu->arch.guest,
> +			sizeof(union context));
> +	copy_to_user(regs->saved_stack, (void *)vcpu, IA64_STK_OFFSET);

Same problem as above - check the return values.

Cheers,
Jes
Carsten Otte
2008-Mar-31 14:52 UTC
[04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Zhang, Xiantao wrote:
> +static struct kvm_vcpu *lid_to_vcpu(struct kvm *kvm, unsigned long id,
> +			unsigned long eid)
> +{
> +	ia64_lid_t lid;
> +	int i;
> +
> +	for (i = 0; i < KVM_MAX_VCPUS; i++) {
> +		if (kvm->vcpus[i]) {
> +			lid.val = VCPU_LID(kvm->vcpus[i]);
> +			if (lid.id == id && lid.eid == eid)
> +				return kvm->vcpus[i];
> +		}
> +	}
> +
> +	return NULL;
> +}
> +
> +static int handle_ipi(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
> +{
> +	struct exit_ctl_data *p = kvm_get_exit_data(vcpu);
> +	struct kvm_vcpu *target_vcpu;
> +	struct kvm_pt_regs *regs;
> +	ia64_ipi_a addr = p->u.ipi_data.addr;
> +	ia64_ipi_d data = p->u.ipi_data.data;
> +
> +	target_vcpu = lid_to_vcpu(vcpu->kvm, addr.id, addr.eid);
> +	if (!target_vcpu)
> +		return handle_vm_error(vcpu, kvm_run);
> +
> +	if (!target_vcpu->arch.launched) {
> +		regs = vcpu_regs(target_vcpu);
> +
> +		regs->cr_iip = vcpu->kvm->arch.rdv_sal_data.boot_ip;
> +		regs->r1 = vcpu->kvm->arch.rdv_sal_data.boot_gp;
> +
> +		target_vcpu->arch.mp_state = VCPU_MP_STATE_RUNNABLE;
> +		if (waitqueue_active(&target_vcpu->wq))
> +			wake_up_interruptible(&target_vcpu->wq);
> +	} else {
> +		vcpu_deliver_ipi(target_vcpu, data.dm, data.vector);
> +		if (target_vcpu != vcpu)
> +			kvm_vcpu_kick(target_vcpu);
> +	}
> +
> +	return 1;
> +}

*Shrug*. This looks highly racy to me. You access various values in
target_vcpu without any lock! I know that taking the target vcpu's lock
doesn't work, because that one is held all the time during KVM_VCPU_RUN.
My solution to that was struct local_interrupt, which has its own lock
and contains the waitqueue plus everything I need to send a sigp [that's
our flavor of ipi].

> +int kvm_emulate_halt(struct kvm_vcpu *vcpu)
> +{
> +
> +	ktime_t kt;
> +	long itc_diff;
> +	unsigned long vcpu_now_itc;
> +
> +	unsigned long expires;
> +	struct hrtimer *p_ht = &vcpu->arch.hlt_timer;

That makes me jealous, I'd love to have hrtimer on s390 for this. I've
got to round up to the next jiffie. *Sigh*

> +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
> +		struct kvm_sregs *sregs)
> +{
> +	printk(KERN_WARNING"kvm:kvm_arch_vcpu_ioctl_set_sregs called!!\n");
> +	return 0;
> +}
> +
> +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
> +		struct kvm_sregs *sregs)
> +{
> +	printk(KERN_WARNING"kvm:kvm_arch_vcpu_ioctl_get_sregs called!!\n");
> +	return 0;
> +
> +}

Suggestion: if get/set sregs doesn't seem useful on ia64, why not
return -EINVAL? In that case you could also drop the kern warning; the
user will either handle that situation or complain.

> +int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
> +{
<snip>
> +	/*FIXME:Need to removed it later!!\n*/
> +	vcpu->arch.apic = kzalloc(sizeof(struct kvm_lapic), GFP_KERNEL);
> +	vcpu->arch.apic->vcpu = vcpu;

Fixme!

> +static int vti_vcpu_setup(struct kvm_vcpu *vcpu, int id)
> +{
> +	unsigned long psr;
> +	int r;
> +
> +	local_irq_save(psr);
> +	r = kvm_insert_vmm_mapping(vcpu);
> +	if (r)
> +		goto fail;
> +	r = kvm_vcpu_init(vcpu, vcpu->kvm, id);
> +	if (r)
> +		goto fail;

Maybe change to return r, rather than goto fail?

> +int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
> +{
> +	printk(KERN_WARNING"kvm:IA64 doesn't need to export"
> +			"fpu to userspace!\n");
> +	return 0;
> +}
> +
> +int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
> +{
> +	printk(KERN_WARNING"kvm:IA64 doesn't need to export"
> +			"fpu to userspace !\n");
> +	return 0;
> +}

Maybe -EINVAL?

> +static int find_highest_bits(int *dat)
> +{
> +	u32 bits, bitnum;
> +	int i;
> +
> +	/* loop for all 256 bits */
> +	for (i = 7; i >= 0 ; i--) {
> +		bits = dat[i];
> +		if (bits) {
> +			bitnum = fls(bits);
> +			return i * 32 + bitnum - 1;
> +		}
> +	}
> +
> +	return -1;
> +}

Should be in asm/bitops.h. Look at find_first_bit() and friends, this
is duplicate code.
Carsten Otte
2008-Apr-01 07:53 UTC
[04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Zhang, Xiantao wrote:
> Hi, Carsten
> Why do you think it is racy? In this function,
> target_vcpu->arch.launched should be set to 1 for the first run, and
> keep its value all the time. Except the first IPI to wake up the vcpu,
> all IPIs received by target vcpu should go into "else" condition. So
> you mean the race condition exists in the "else" code?

For example, to lock against destroying that vcpu. Or, the waitqueue
may become active after if (waitqueue_active()) and before
wake_up_interruptible(). In that case, the target vcpu might sleep and
not get woken up by the ipi.
Carsten Otte
2008-Apr-01 10:59 UTC
[04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Zhang, Xiantao wrote:
> Carsten Otte wrote:
>> Zhang, Xiantao wrote:
>>> Hi, Carsten
>>> Why do you think it is racy? In this function,
>>> target_vcpu->arch.launched should be set to 1 for the first run, and
>>> keep its value all the time. Except the first IPI to wake up the
>>> vcpu, all IPIs received by target vcpu should go into "else"
>>> condition. So you mean the race condition exists in the "else" code?
>> For example, to lock against destroying that vcpu. Or, the waitqueue
>> may become active after if (waitqueue_active()) and before
>> wake_up_interruptible(). In that case, the target vcpu might sleep
>> and not get woken up by the ipi.
> I don't think it may cause an issue, because the target vcpu can at
> least be woken up by the timer interrupt.
>
> But as you said, does the x86 side have the same race issue?

As far as I can tell, x86 doesn't have that race.
Carsten Otte
2008-Apr-01 11:49 UTC
[04/17] [PATCH] Add kvm arch-specific core code for kvm/ia64.-V8
Zhang, Xiantao wrote:
> Carsten Otte wrote:
>> Zhang, Xiantao wrote:
>>> Carsten Otte wrote:
>>>> Zhang, Xiantao wrote:
>>>>> Hi, Carsten
>>>>> Why do you think it is racy? In this function,
>>>>> target_vcpu->arch.launched should be set to 1 for the first run,
>>>>> and keep its value all the time. Except the first IPI to wake up
>>>>> the vcpu, all IPIs received by target vcpu should go into "else"
>>>>> condition. So you mean the race condition exists in the "else"
>>>>> code?
>>>> For example, to lock against destroying that vcpu. Or, the
>>>> waitqueue may become active after if (waitqueue_active()) and
>>>> before wake_up_interruptible(). In that case, the target vcpu
>>>> might sleep and not get woken up by the ipi.
>>> I don't think it may cause an issue, because the target vcpu can at
>>> least be woken up by the timer interrupt.
>>>
>>> But as you said, does the x86 side have the same race issue?
>> As far as I can tell, x86 doesn't have that race.
>
> Hi, Carsten
> I can't understand why it only exists on the IA64 side. Thank you!
> Xiantao

Well, x86 doesn't signal the target processor by accessing the vcpu
data structure; they use the IPI signal for that, as far as I can see.
And s390 does have an explicit lock for this purpose. Itanium, however,
does not have a lock but does access the target vcpu struct.