While reading changeset 17962 I had a few questions; can anyone give me some hints?

a) The major part of the patch is the affinity for NMI/MCE. What is the relationship between this part and the subject (i.e. the suspend event channel)?

b) It states that user space tools should "prevent multiple subscribers". Is there any special reason we have to leave this to user space? I assume some simple checking in Xen would avoid multiple subscriptions, or am I missing something important?

Thanks
Yunhong Jiang
On 09/03/2009 07:53, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:

> While reading changeset 17962 I had a few questions; can anyone give me
> some hints?
>
> a) The major part of the patch is the affinity for NMI/MCE. What is the
> relationship between this part and the subject (i.e. the suspend event
> channel)?

I incorrectly checked in two patches at once. You should read 17962+17964 together.

For (b), Xen itself has okay semantics -- the most recent caller to set the suspend_evtchn always wins. How tools make use of that policy is up to them -- since we can only have one save process per domain at a time, it all works out fine.

 -- Keir
Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 09/03/2009 07:53, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
>
>> While reading changeset 17962 I had a few questions; can anyone give
>> me some hints?
>>
>> a) The major part of the patch is the affinity for NMI/MCE. What is the
>> relationship between this part and the subject (i.e. the suspend event
>> channel)?
>
> I incorrectly checked in two patches at once. You should read 17962+17964
> together.

Got it, thanks!

> For (b), Xen itself has okay semantics -- the most recent caller to set
> the suspend_evtchn always wins. How tools make use of that policy is up
> to them -- since we can only have one save process per domain at a time,
> it all works out fine.

Is there any special reason not to let the first caller hold it (which seems more natural, IMO) and have later callers fail?

Thanks
 -- Yunhong Jiang
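[To make the two policies under discussion concrete, here is a toy model in standalone C. This is purely illustrative, not Xen code: only the suspend_evtchn field name mirrors Xen's domain structure; everything else is invented for the sketch.]

#include <stdio.h>

/* Illustrative model of the two subscription policies; not Xen code. */
struct domain { int suspend_evtchn; };  /* 0 = no subscriber */

/* Last-caller-wins: what Keir describes Xen as doing. */
int subscribe_last_wins(struct domain *d, int port)
{
    d->suspend_evtchn = port;   /* newest registration simply overwrites */
    return 0;
}

/* First-caller-wins: the alternative Yunhong suggests. */
int subscribe_first_wins(struct domain *d, int port)
{
    if (d->suspend_evtchn != 0)
        return -1;              /* busy: someone already holds it */
    d->suspend_evtchn = port;
    return 0;
}

int main(void)
{
    struct domain d = { 0 };

    subscribe_last_wins(&d, 5);
    subscribe_last_wins(&d, 7);             /* succeeds, overwrites port 5 */
    printf("last-wins port:  %d\n", d.suspend_evtchn);       /* prints 7 */

    d.suspend_evtchn = 0;
    subscribe_first_wins(&d, 5);
    int rc = subscribe_first_wins(&d, 7);   /* fails: port 5 holds it */
    printf("first-wins port: %d, rc=%d\n", d.suspend_evtchn, rc);
    return 0;
}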
On 09/03/2009 09:25, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:

>> For (b), Xen itself has okay semantics -- the most recent caller to set
>> the suspend_evtchn always wins. How tools make use of that policy is up
>> to them -- since we can only have one save process per domain at a time,
>> it all works out fine.
>
> Is there any special reason not to let the first caller hold it (which
> seems more natural, IMO) and have later callers fail?

The only reason I can think of is if the xc_save process fails and exit()s and then we want to continue execution of the domain and maybe try xc_save again later. Then the first registered evtchn won't be cleaned up, and we would like to overwrite it when we next try xc_save.

Arguably we should make the kernel evtchn driver aware of suspend evtchns and clean them up on process destruction. Then we could tighten up Xen's checking. But... it's all kind of a hassle for hardly any reward!

 -- Keir
On Monday, 09 March 2009 at 09:44, Keir Fraser wrote:
> On 09/03/2009 09:25, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
>
>>> For (b), Xen itself has okay semantics -- the most recent caller to
>>> set the suspend_evtchn always wins. How tools make use of that policy
>>> is up to them -- since we can only have one save process per domain
>>> at a time, it all works out fine.
>>
>> Is there any special reason not to let the first caller hold it (which
>> seems more natural, IMO) and have later callers fail?
>
> The only reason I can think of is if the xc_save process fails and
> exit()s and then we want to continue execution of the domain and maybe
> try xc_save again later. Then the first registered evtchn won't be
> cleaned up, and we would like to overwrite it when we next try xc_save.

That was the idea. If tools want to make the first user win, they can agree on a locking strategy between themselves.

> Arguably we should make the kernel evtchn driver aware of suspend evtchns
> and clean them up on process destruction. Then we could tighten up Xen's
> checking. But... it's all kind of a hassle for hardly any reward!

Agreed :)
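[For illustration, one locking strategy the tools could agree on is a per-domain advisory lock file. This is a hypothetical sketch, not code from the Xen tree: the lock path and helper name are invented. A convenient property of flock() is that the lock is dropped automatically when the holding process exits, which also covers the crashed-xc_save scenario Keir describes above.]

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

/* Hypothetical helper: take a per-domain advisory lock before
 * registering a suspend evtchn.  Path and name are invented for
 * illustration.  flock() locks are released automatically when the
 * holding process exits, so a crashed xc_save cannot wedge the lock. */
int lock_suspend_evtchn(int domid)
{
    char path[64];
    snprintf(path, sizeof(path), "/var/run/xen-suspend-evtchn.%d", domid);

    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return -1;

    /* Non-blocking: fail immediately if another tool holds the lock,
     * giving first-caller-wins semantics in the tools on top of Xen's
     * last-caller-wins policy. */
    if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
        close(fd);
        return -1;
    }
    return fd;  /* keep open for the lifetime of the subscription */
}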
Brendan Cully <brendan@cs.ubc.ca> wrote:
> On Monday, 09 March 2009 at 09:44, Keir Fraser wrote:
>> The only reason I can think of is if the xc_save process fails and
>> exit()s and then we want to continue execution of the domain and maybe
>> try xc_save again later. Then the first registered evtchn won't be
>> cleaned up, and we would like to overwrite it when we next try xc_save.
>
> That was the idea. If tools want to make the first user win, they can
> agree on a locking strategy between themselves.
>
>> Arguably we should make the kernel evtchn driver aware of suspend
>> evtchns and clean them up on process destruction. Then we could tighten
>> up Xen's checking. But... it's all kind of a hassle for hardly any
>> reward!
>
> Agreed :)

Brendan/Keir, thanks for the clarification. I asked this because, according to a discussion with Tim, we will use this feature for page offlining as well, which means multiple processes will use it. I will create something in the tools to achieve this.

Thanks
Yunhong Jiang
On 10/03/2009 01:47, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:

>> Agreed :)
>
> Brendan/Keir, thanks for the clarification. I asked this because,
> according to a discussion with Tim, we will use this feature for page
> offlining as well, which means multiple processes will use it.
> I will create something in the tools to achieve this.

Well, hang on. Only one process can likely safely suspend and work on a guest at a time. Unless you do some serialisation in the toolstack (probably xend), you're going to be racy, aren't you? I don't think you need to serialise in the hypervisor.

 -- Keir
Sure, I will not do serialization in the hypervisor. I have tried the implementation on the libxc side, not in xend.

 -- Yunhong Jiang

Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 10/03/2009 01:47, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
>
>>> Agreed :)
>>
>> Brendan/Keir, thanks for the clarification. I asked this because,
>> according to a discussion with Tim, we will use this feature for page
>> offlining as well, which means multiple processes will use it.
>> I will create something in the tools to achieve this.
>
> Well, hang on. Only one process can likely safely suspend and work on a
> guest at a time. Unless you do some serialisation in the toolstack
> (probably xend), you're going to be racy, aren't you? I don't think you
> need to serialise in the hypervisor.
>
> -- Keir
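[A sketch of how that libxc-side serialization might look, reusing the per-domain lock from the earlier sketch. All names here are illustrative; xc_domain_subscribe_for_suspend is, as far as I recall, the libxc call these changesets added, but treat its signature as approximate rather than authoritative.]

#include <unistd.h>

/* Assumed entry points; signatures approximate / illustrative. */
extern int xc_domain_subscribe_for_suspend(int xc_handle, int domid,
                                           int suspend_evtchn_port);
extern int lock_suspend_evtchn(int domid);  /* from the earlier sketch */

/* Hypothetical libxc-side helper: serialize subscribers in the tools,
 * then rely on Xen's last-caller-wins subscribe for crash recovery. */
int register_suspend_evtchn(int xc_handle, int domid, int port)
{
    int lock_fd = lock_suspend_evtchn(domid);
    if (lock_fd < 0)
        return -1;  /* another subscriber (e.g. xc_save) is active */

    /* No race with other tool processes now; a stale registration left
     * by a crashed subscriber is simply overwritten by this call. */
    if (xc_domain_subscribe_for_suspend(xc_handle, domid, port) < 0) {
        close(lock_fd);
        return -1;
    }
    return lock_fd;  /* hold until the subscription is released */
}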
Okay, I misread your original email. Tools it is, then!

 -- Keir

On 10/03/2009 08:29, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:

> Sure, I will not do serialization in the hypervisor. I have tried the
> implementation on the libxc side, not in xend.
>
> -- Yunhong Jiang
>
> Keir Fraser <keir.fraser@eu.citrix.com> wrote:
>> Well, hang on. Only one process can likely safely suspend and work on a
>> guest at a time. Unless you do some serialisation in the toolstack
>> (probably xend), you're going to be racy, aren't you? I don't think you
>> need to serialise in the hypervisor.
>>
>> -- Keir