Nathan Studer
2013-Nov-04 03:03 UTC
[Patch] Call sched_destroy_domain before cpupool_rm_domain.
From: Nathan Studer <nate.studer@dornerworks.com>

The domain destruction code removes a domain from its cpupool
before attempting to destroy its scheduler information. Since
the scheduler framework uses the domain's cpupool information
to decide on which scheduler ops to use, this results in the
wrong scheduler's destroy domain function being called when
the cpupool scheduler and the initial scheduler are different.

Correct this by destroying the domain's scheduling information
before removing it from the pool.

Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
---
 xen/common/domain.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 5999779..78ce968 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -727,10 +727,10 @@ static void complete_domain_destroy(struct rcu_head *head)

     rangeset_domain_destroy(d);

-    cpupool_rm_domain(d);
-
     sched_destroy_domain(d);

+    cpupool_rm_domain(d);
+
     /* Free page used by xen oprofile buffer. */
 #ifdef CONFIG_XENOPROF
     free_xenoprof_pages(d);
--
1.7.9.5
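For context: the scheduler framework picks which scheduler's ops to invoke for
a domain through the domain's cpupool, falling back to the boot scheduler's ops
once the domain has been removed from its pool. The standalone C sketch below
only models that dispatch in order to illustrate the ordering problem the patch
fixes; the structures and names (dom_ops, boot_sched, and so on) are simplified
stand-ins, not the actual Xen definitions.

    #include <stdio.h>

    /* Simplified model, NOT the real Xen structures: the ops used for a
     * per-domain operation are taken from the domain's cpupool, with a
     * fallback to the boot scheduler when the domain is in no pool. */

    struct scheduler {
        const char *name;
        void (*destroy_domain)(const char *dom);
    };

    static void credit_destroy(const char *dom) { printf("credit destroys %s\n", dom); }
    static void a653_destroy(const char *dom)   { printf("arinc653 destroys %s\n", dom); }

    static struct scheduler boot_sched = { "credit",   credit_destroy };
    static struct scheduler pool_sched = { "arinc653", a653_destroy };

    struct cpupool { struct scheduler *sched; };
    struct domain  { const char *name; struct cpupool *cpupool; };

    /* No pool (NULL) means "use the boot scheduler's ops". */
    static struct scheduler *dom_ops(struct domain *d)
    {
        return d->cpupool ? d->cpupool->sched : &boot_sched;
    }

    int main(void)
    {
        struct cpupool pool = { &pool_sched };
        struct domain d = { "dom1", &pool };

        /* Old (buggy) order: remove from pool, then destroy -> the boot
         * scheduler is asked to free arinc653's private data. */
        d.cpupool = NULL;                     /* cpupool_rm_domain(d)   */
        dom_ops(&d)->destroy_domain(d.name);  /* sched_destroy_domain(d) */

        /* Patched order: destroy while still in the pool, then remove. */
        d.cpupool = &pool;
        dom_ops(&d)->destroy_domain(d.name);
        d.cpupool = NULL;
        return 0;
    }

Running the sketch prints "credit destroys dom1" for the buggy order and
"arinc653 destroys dom1" for the patched order, which is the mismatch the
commit message describes.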
Juergen Gross
2013-Nov-04 06:30 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 04.11.2013 04:03, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> The domain destruction code removes a domain from its cpupool
> before attempting to destroy its scheduler information. Since
> the scheduler framework uses the domain's cpupool information
> to decide on which scheduler ops to use, this results in the
> wrong scheduler's destroy domain function being called when
> the cpupool scheduler and the initial scheduler are different.
>
> Correct this by destroying the domain's scheduling information
> before removing it from the pool.
>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>

Reviewed-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

> ---
>  xen/common/domain.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 5999779..78ce968 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -727,10 +727,10 @@ static void complete_domain_destroy(struct rcu_head *head)
>
>      rangeset_domain_destroy(d);
>
> -    cpupool_rm_domain(d);
> -
>      sched_destroy_domain(d);
>
> +    cpupool_rm_domain(d);
> +
>      /* Free page used by xen oprofile buffer. */
>  #ifdef CONFIG_XENOPROF
>      free_xenoprof_pages(d);

--
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6          Telephone: +49 (0) 89 62060 2932
Fujitsu                       e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8      Internet: ts.fujitsu.com
D-80807 Muenchen              Company details: ts.fujitsu.com/imprint.html
Dario Faggioli
2013-Nov-04 09:26 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On lun, 2013-11-04 at 07:30 +0100, Juergen Gross wrote:
> On 04.11.2013 04:03, Nathan Studer wrote:
> > From: Nathan Studer <nate.studer@dornerworks.com>
> >
> > The domain destruction code removes a domain from its cpupool
> > before attempting to destroy its scheduler information. Since
> > the scheduler framework uses the domain's cpupool information
> > to decide on which scheduler ops to use, this results in the
> > wrong scheduler's destroy domain function being called when
> > the cpupool scheduler and the initial scheduler are different.
> >
> > Correct this by destroying the domain's scheduling information
> > before removing it from the pool.
> >
> > Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>
> Reviewed-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>
I think this is a candidate for backports too, isn't it?

Nathan, what was happening without this patch? Are you able to quickly
figure out which previous Xen versions suffer from the same bug?

Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Andrew Cooper
2013-Nov-04 09:33 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 04/11/13 06:30, Juergen Gross wrote:
> On 04.11.2013 04:03, Nathan Studer wrote:
>> From: Nathan Studer <nate.studer@dornerworks.com>
>>
>> The domain destruction code removes a domain from its cpupool
>> before attempting to destroy its scheduler information. Since
>> the scheduler framework uses the domain's cpupool information
>> to decide on which scheduler ops to use, this results in the
>> wrong scheduler's destroy domain function being called when
>> the cpupool scheduler and the initial scheduler are different.
>>
>> Correct this by destroying the domain's scheduling information
>> before removing it from the pool.
>>
>> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>
> Reviewed-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>> ---
>>  xen/common/domain.c |    4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 5999779..78ce968 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -727,10 +727,10 @@ static void complete_domain_destroy(struct rcu_head *head)
>>
>>      rangeset_domain_destroy(d);
>>
>> -    cpupool_rm_domain(d);
>> -
>>      sched_destroy_domain(d);
>>
>> +    cpupool_rm_domain(d);
>> +
>>      /* Free page used by xen oprofile buffer. */
>>  #ifdef CONFIG_XENOPROF
>>      free_xenoprof_pages(d);
Juergen Gross
2013-Nov-04 09:58 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 04.11.2013 10:26, Dario Faggioli wrote:
> On lun, 2013-11-04 at 07:30 +0100, Juergen Gross wrote:
>> On 04.11.2013 04:03, Nathan Studer wrote:
>>> From: Nathan Studer <nate.studer@dornerworks.com>
>>>
>>> The domain destruction code removes a domain from its cpupool
>>> before attempting to destroy its scheduler information. Since
>>> the scheduler framework uses the domain's cpupool information
>>> to decide on which scheduler ops to use, this results in the
>>> wrong scheduler's destroy domain function being called when
>>> the cpupool scheduler and the initial scheduler are different.
>>>
>>> Correct this by destroying the domain's scheduling information
>>> before removing it from the pool.
>>>
>>> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>>
>> Reviewed-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>>
> I think this is a candidate for backports too, isn't it?
>
> Nathan, what was happening without this patch? Are you able to quickly
> figure out which previous Xen versions suffer from the same bug?

In theory this bug is present since 4.1.

OTOH it will be hit only with arinc653 scheduler in a cpupool other than
Pool-0. And I don't see how this is being supported by arinc653 today
(pick_cpu will always return 0).

All other schedulers will just call xfree() for the domain specific data
(and maybe update some statistic data, which is not critical).

Juergen

--
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6          Telephone: +49 (0) 89 62060 2932
Fujitsu                       e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8      Internet: ts.fujitsu.com
D-80807 Muenchen              Company details: ts.fujitsu.com/imprint.html
George Dunlap
2013-Nov-04 15:10 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 04/11/13 03:03, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> The domain destruction code removes a domain from its cpupool
> before attempting to destroy its scheduler information. Since
> the scheduler framework uses the domain's cpupool information
> to decide on which scheduler ops to use, this results in the
> wrong scheduler's destroy domain function being called when
> the cpupool scheduler and the initial scheduler are different.
>
> Correct this by destroying the domain's scheduling information
> before removing it from the pool.
>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>

Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>

Thanks!
 -George
Nate Studer
2013-Nov-04 15:22 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 11/4/2013 4:58 AM, Juergen Gross wrote:
> On 04.11.2013 10:26, Dario Faggioli wrote:
>> On lun, 2013-11-04 at 07:30 +0100, Juergen Gross wrote:
>>> On 04.11.2013 04:03, Nathan Studer wrote:
>>>> From: Nathan Studer <nate.studer@dornerworks.com>
>>>>
>>>> The domain destruction code removes a domain from its cpupool
>>>> before attempting to destroy its scheduler information. Since
>>>> the scheduler framework uses the domain's cpupool information
>>>> to decide on which scheduler ops to use, this results in the
>>>> wrong scheduler's destroy domain function being called when
>>>> the cpupool scheduler and the initial scheduler are different.
>>>>
>>>> Correct this by destroying the domain's scheduling information
>>>> before removing it from the pool.
>>>>
>>>> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>>>
>>> Reviewed-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>>>
>> I think this is a candidate for backports too, isn't it?
>>
>> Nathan, what was happening without this patch? Are you able to quickly
>> figure out which previous Xen versions suffer from the same bug?

Various things:

If I used the credit scheduler in Pool-0 and the arinc653 scheduler in the
other pool, it would:
 1.  Hit a BUG_ON in the arinc653 scheduler.
 2.  Hit an assert in the scheduling framework code.
 3.  Or crash in the credit scheduler's csched_free_domdata function.

The latter clued me in that the wrong scheduler's destroy function was somehow
being called.

If I used the credit2 scheduler in the other pool, I would only ever see the
latter.

Similarly, if I used the sedf scheduler in the other pool, I would only ever
see the latter. However, when using the sedf scheduler I would have to create
and destroy the domain twice, instead of just once.

> In theory this bug is present since 4.1.
>
> OTOH it will be hit only with arinc653 scheduler in a cpupool other than
> Pool-0. And I don't see how this is being supported by arinc653 today
> (pick_cpu will always return 0).

Correct, the arinc653 scheduler currently does not work with cpupools. We are
working on remedying that though, which is how I ran into this. I would have
just wrapped this patch in with the upcoming arinc653 ones, if I had not run
into the same issue with the other schedulers.

> All other schedulers will just call xfree() for the domain specific data
> (and maybe update some statistic data, which is not critical).

The credit and credit2 schedulers do a bit more than that in their free_domdata
functions.

The credit scheduler frees the node_affinity_cpumask contained in the domain
data and the credit2 scheduler deletes a list element contained in the domain
data. Since with this bug they are accessing structures that do not belong to
them, bad things happen.

With the credit scheduler in Pool-0, the result should be an invalid free and
an eventual crash. With the credit2 scheduler in Pool-0, the effects might be
a bit more unpredictable. At best it should result in an invalid pointer
dereference.

Likewise, since the other schedulers do not do this additional work, there
would probably be other issues if the sedf or arinc653 scheduler was running
in Pool-0 and one of the credit schedulers was run in the other pool. I do not
know enough about the credit scheduler to make any predictions about what
would happen though.

> Juergen
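For reference, the credit scheduler's free_domdata at the time did roughly the
following. This is a paraphrased sketch based on the description above, not the
verbatim 4.3 source, and the exact struct and field names may differ:

    static void
    csched_free_domdata(const struct scheduler *ops, void *data)
    {
        struct csched_dom *sdom = data;

        /* If 'data' was actually allocated by a different scheduler (the
         * mis-dispatch described above), this dereferences and frees a
         * cpumask pointer that the other scheduler's structure never
         * contained, hence the invalid free / crash. */
        free_cpumask_var(sdom->node_affinity_cpumask);
        xfree(sdom);
    }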
Juergen Gross
2013-Nov-05 05:59 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 04.11.2013 16:22, Nate Studer wrote:
> On 11/4/2013 4:58 AM, Juergen Gross wrote:
>> On 04.11.2013 10:26, Dario Faggioli wrote:
>>> On lun, 2013-11-04 at 07:30 +0100, Juergen Gross wrote:
>>>> On 04.11.2013 04:03, Nathan Studer wrote:
>>>>> From: Nathan Studer <nate.studer@dornerworks.com>
>>>>>
>>>>> The domain destruction code removes a domain from its cpupool
>>>>> before attempting to destroy its scheduler information. Since
>>>>> the scheduler framework uses the domain's cpupool information
>>>>> to decide on which scheduler ops to use, this results in the
>>>>> wrong scheduler's destroy domain function being called when
>>>>> the cpupool scheduler and the initial scheduler are different.
>>>>>
>>>>> Correct this by destroying the domain's scheduling information
>>>>> before removing it from the pool.
>>>>>
>>>>> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>>>>
>>>> Reviewed-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>>>>
>>> I think this is a candidate for backports too, isn't it?
>>>
>>> Nathan, what was happening without this patch? Are you able to quickly
>>> figure out which previous Xen versions suffer from the same bug?
>
> Various things:
>
> If I used the credit scheduler in Pool-0 and the arinc653 scheduler in the
> other pool, it would:
>  1.  Hit a BUG_ON in the arinc653 scheduler.
>  2.  Hit an assert in the scheduling framework code.
>  3.  Or crash in the credit scheduler's csched_free_domdata function.
>
> The latter clued me in that the wrong scheduler's destroy function was somehow
> being called.
>
> If I used the credit2 scheduler in the other pool, I would only ever see the
> latter.
>
> Similarly, if I used the sedf scheduler in the other pool, I would only ever
> see the latter. However, when using the sedf scheduler I would have to create
> and destroy the domain twice, instead of just once.
>
>> In theory this bug is present since 4.1.
>>
>> OTOH it will be hit only with arinc653 scheduler in a cpupool other than
>> Pool-0. And I don't see how this is being supported by arinc653 today
>> (pick_cpu will always return 0).
>
> Correct, the arinc653 scheduler currently does not work with cpupools. We are
> working on remedying that though, which is how I ran into this. I would have
> just wrapped this patch in with the upcoming arinc653 ones, if I had not run
> into the same issue with the other schedulers.
>
>> All other schedulers will just call xfree() for the domain specific data
>> (and maybe update some statistic data, which is not critical).
>
> The credit and credit2 schedulers do a bit more than that in their free_domdata
> functions.

Sorry, got not enough sleep on the weekend ;-)

I checked only the 4.1 and 4.2 trees. There, only an xfree of the domain data
is done.

> The credit scheduler frees the node_affinity_cpumask contained in the domain
> data and the credit2 scheduler deletes a list element contained in the domain
> data. Since with this bug they are accessing structures that do not belong to
> them, bad things happen.

So the patch would be subject to a 4.3 backport, I think.

Juergen

--
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6          Telephone: +49 (0) 89 62060 2932
Fujitsu                       e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8      Internet: ts.fujitsu.com
D-80807 Muenchen              Company details: ts.fujitsu.com/imprint.html
Keir Fraser
2013-Nov-05 21:09 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 04/11/2013 09:33, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:

> On 04/11/13 06:30, Juergen Gross wrote:
>> On 04.11.2013 04:03, Nathan Studer wrote:
>>> From: Nathan Studer <nate.studer@dornerworks.com>
>>>
>>> The domain destruction code removes a domain from its cpupool
>>> before attempting to destroy its scheduler information. Since
>>> the scheduler framework uses the domain's cpupool information
>>> to decide on which scheduler ops to use, this results in the
>>> wrong scheduler's destroy domain function being called when
>>> the cpupool scheduler and the initial scheduler are different.
>>>
>>> Correct this by destroying the domain's scheduling information
>>> before removing it from the pool.
>>>
>>> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
>>
>> Reviewed-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Keir Fraser <keir@xen.org>
Jan Beulich
2013-Nov-07 07:39 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
>>> On 05.11.13 at 06:59, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> On 04.11.2013 16:22, Nate Studer wrote:
>> On 11/4/2013 4:58 AM, Juergen Gross wrote:
>>> All other schedulers will just call xfree() for the domain specific data
>>> (and maybe update some statistic data, which is not critical).
>>
>> The credit and credit2 schedulers do a bit more than that in their
>> free_domdata functions.
>
> Sorry, got not enough sleep on the weekend ;-)
>
> I checked only the 4.1 and 4.2 trees. There, only an xfree of the domain data
> is done.
>
>> The credit scheduler frees the node_affinity_cpumask contained in the domain
>> data and the credit2 scheduler deletes a list element contained in the domain
>> data. Since with this bug they are accessing structures that do not belong to
>> them, bad things happen.
>
> So the patch would be subject to a 4.3 backport, I think.

Hmm, I'm slightly confused: credit2's free_domdata has always been
doing more than just xfree() afaict, and hence backporting is either
necessary uniformly or (taking into account that it was made clear
that arinc doesn't work with CPU pools anyway so far) not at all.

Please clarify.

Jan
Juergen Gross
2013-Nov-07 09:09 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 07.11.2013 08:39, Jan Beulich wrote:
>>>> On 05.11.13 at 06:59, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>> On 04.11.2013 16:22, Nate Studer wrote:
>>> On 11/4/2013 4:58 AM, Juergen Gross wrote:
>>>> All other schedulers will just call xfree() for the domain specific data
>>>> (and maybe update some statistic data, which is not critical).
>>>
>>> The credit and credit2 schedulers do a bit more than that in their
>>> free_domdata functions.
>>
>> Sorry, got not enough sleep on the weekend ;-)
>>
>> I checked only the 4.1 and 4.2 trees. There, only an xfree of the domain data
>> is done.
>>
>>> The credit scheduler frees the node_affinity_cpumask contained in the domain
>>> data and the credit2 scheduler deletes a list element contained in the domain
>>> data. Since with this bug they are accessing structures that do not belong to
>>> them, bad things happen.
>>
>> So the patch would be subject to a 4.3 backport, I think.
>
> Hmm, I'm slightly confused: credit2's free_domdata has always been
> doing more than just xfree() afaict, and hence backporting is either
> necessary uniformly or (taking into account that it was made clear
> that arinc doesn't work with CPU pools anyway so far) not at all.
>
> Please clarify.

Okay, I assumed only "production ready" features are to be taken into account
for a backport. And credit2 is clearly not in this state, or am I wrong?

A 4.3 backport should be considered in any case, as sedf and credit schedulers
behave differently in free_domdata, and both are "production ready". If you
want to be safe for credit2 and/or arinc653 as well, backports to 4.2 and 4.1
will be required.

In any case a backport isn't very complex. :-)

Juergen

--
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6          Telephone: +49 (0) 89 62060 2932
Fujitsu                       e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8      Internet: ts.fujitsu.com
D-80807 Muenchen              Company details: ts.fujitsu.com/imprint.html
Jan Beulich
2013-Nov-07 09:37 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
>>> On 07.11.13 at 10:09, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> On 07.11.2013 08:39, Jan Beulich wrote:
>>>>> On 05.11.13 at 06:59, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>> On 04.11.2013 16:22, Nate Studer wrote:
>>>> On 11/4/2013 4:58 AM, Juergen Gross wrote:
>>>>> All other schedulers will just call xfree() for the domain specific data
>>>>> (and maybe update some statistic data, which is not critical).
>>>>
>>>> The credit and credit2 schedulers do a bit more than that in their
>>>> free_domdata functions.
>>>
>>> Sorry, got not enough sleep on the weekend ;-)
>>>
>>> I checked only the 4.1 and 4.2 trees. There, only an xfree of the domain
>>> data is done.
>>>
>>>> The credit scheduler frees the node_affinity_cpumask contained in the domain
>>>> data and the credit2 scheduler deletes a list element contained in the domain
>>>> data. Since with this bug they are accessing structures that do not belong to
>>>> them, bad things happen.
>>>
>>> So the patch would be subject to a 4.3 backport, I think.
>>
>> Hmm, I'm slightly confused: credit2's free_domdata has always been
>> doing more than just xfree() afaict, and hence backporting is either
>> necessary uniformly or (taking into account that it was made clear
>> that arinc doesn't work with CPU pools anyway so far) not at all.
>>
>> Please clarify.
>
> Okay, I assumed only "production ready" features are to be taken into account
> for a backport. And credit2 is clearly not in this state, or am I wrong?

You aren't, but is arinc production ready? I wouldn't think so
simply based on it not working with CPU pools. And then the
backporting question would become moot.

> A 4.3 backport should be considered in any case, as sedf and credit schedulers
> behave differently in free_domdata, and both are "production ready". If you
> want to be safe for credit2 and/or arinc653 as well, backports to 4.2 and 4.1
> will be required.
>
> In any case a backport isn't very complex. :-)

Indeed. But I'd like backports to be on purpose as well as consistent
across trees (iow: applied to all maintained trees where needed, and
only there).

Jan
Juergen Gross
2013-Nov-07 09:43 UTC
Re: [Patch] Call sched_destroy_domain before cpupool_rm_domain.
On 07.11.2013 10:37, Jan Beulich wrote:
>>>> On 07.11.13 at 10:09, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>> On 07.11.2013 08:39, Jan Beulich wrote:
>>>>>> On 05.11.13 at 06:59, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>>> On 04.11.2013 16:22, Nate Studer wrote:
>>>>> On 11/4/2013 4:58 AM, Juergen Gross wrote:
>>>>>> All other schedulers will just call xfree() for the domain specific data
>>>>>> (and maybe update some statistic data, which is not critical).
>>>>>
>>>>> The credit and credit2 schedulers do a bit more than that in their
>>>>> free_domdata functions.
>>>>
>>>> Sorry, got not enough sleep on the weekend ;-)
>>>>
>>>> I checked only the 4.1 and 4.2 trees. There, only an xfree of the domain
>>>> data is done.
>>>>
>>>>> The credit scheduler frees the node_affinity_cpumask contained in the domain
>>>>> data and the credit2 scheduler deletes a list element contained in the domain
>>>>> data. Since with this bug they are accessing structures that do not belong to
>>>>> them, bad things happen.
>>>>
>>>> So the patch would be subject to a 4.3 backport, I think.
>>>
>>> Hmm, I'm slightly confused: credit2's free_domdata has always been
>>> doing more than just xfree() afaict, and hence backporting is either
>>> necessary uniformly or (taking into account that it was made clear
>>> that arinc doesn't work with CPU pools anyway so far) not at all.
>>>
>>> Please clarify.
>>
>> Okay, I assumed only "production ready" features are to be taken into account
>> for a backport. And credit2 is clearly not in this state, or am I wrong?
>
> You aren't, but is arinc production ready? I wouldn't think so
> simply based on it not working with CPU pools. And then the
> backporting question would become moot.

No, it doesn't. The following statement should have made that clear:

>> A 4.3 backport should be considered in any case, as sedf and credit schedulers
>> behave differently in free_domdata, and both are "production ready".

If you have credit as default scheduler and use sedf in a cpupool, destroying
a domain in the cpupool with sedf will use the credit free_domdata routine,
leading to an error in 4.3 when calling free_cpumask_var().

Juergen

--
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6          Telephone: +49 (0) 89 62060 2932
Fujitsu                       e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8      Internet: ts.fujitsu.com
D-80807 Muenchen              Company details: ts.fujitsu.com/imprint.html