As release coordinator, and therefore de-facto tracker-of-bugs-and-regressions, it seems to me that one of the shortcomings in our current development mode is a lack of good regression testing for many of the less common, but still very important, features of Xen: things like S3, driver domains, nested virt, and so on.

We do have a regression-testing push gate for the xen trees, written by Ian Jackson, called osstest, with a lot of great functionality, including a test scheduler, an automatic Bayesian bisector, and so on. Apart from more hardware, it is mainly lacking a more complete set of tests.

The Xen Project team here at Citrix agreed with me, and we have decided to take on the task of adding some important functional tests to osstest during the 4.4 timeframe; our list is below. We'd love for you to join us.

I will be tracking the implementation of the tests as part of the regular 4.4 release updates. I encourage people to join us by thinking about a particular feature or bit of functionality that is not yet tested by osstest which you would like to see implemented in the 4.4 timeframe, and responding to this e-mail, or to one of the regular development update e-mails, asking for it to be added.

There is a description of osstest, along with a link to the source code, here:

http://blog.xen.org/index.php/2013/02/02/xen-automatic-test-system-osstest/

=== Testing coverage ===

* Network driver domains
  @George

* New libxl w/ previous versions of xl
  @IanJ

* Host S3 suspend
  @bguthro?

* Default [example] XSM policy
  @Stefano to ask Daniel D

* Xen on ARM
  # problem ATM: hardware
  @ianc
  emulator: @stefano to think about it

* Storage driver domains
  @roger

* HVM PCI passthrough
  @anthony

* Nested virt?
  @intel (chased by George)

* Fix SR-IOV test (chase Intel)
  @ianj

* Fix bisector to e-mail blame-worthy parties
  @ianj

* Fix xl shutdown
  @ianj

* Stub domains
  @anthony
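For context on what the network and storage driver-domain entries in the list above would exercise: in an xl guest config, the `backend=` key in the vif and disk specifications selects which domain serves the backend. A hedged sketch of such a config -- the domain names "netdd" and "storagedd" and the volume path are made up for illustration, and the test would first have to build and start those driver domains:

```
# Hypothetical xl guest config using driver domains (not from the thread).
# "netdd" and "storagedd" are assumed pre-started backend domains.
name   = "osstest-guest"
memory = 512
vcpus  = 2
# network backend served by the "netdd" domain instead of dom0
vif  = [ 'bridge=xenbr0,backend=netdd' ]
# block backend served by the "storagedd" domain
disk = [ 'backend=storagedd,vdev=xvda,format=raw,target=/dev/vg0/guest' ]
```

A functional test would then boot the guest and check that the vif and disk actually work with dom0's backends out of the picture.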
Thursday, August 8, 2013, 6:01:01 PM, you wrote:

> [...]
> * Fix xl shutdown
>   @ianj

:-)

Some ideas, perhaps only with a xen version after a push (so Xen shouldn't be directly to blame):

* More current kernels:
  - latest stable?
  - linux-next? (although perhaps only just before the merge window, to avoid spending too much time breaking on other random kernel stuff)
  - Linus's tree?
  - the xen kernel tree's next branch

* Some performance testing (network, block, cpu/mem/fork/real-apps benchmarks, some other metrics)
  - perhaps make separate graphs of these, so one can see performance increases or decreases over a larger time frame, and see around which commits they occurred.
  - only after all basic tests succeeded and a push was done.

-- 
Sander
Thursday, August 8, 2013, 6:01:01 PM, you wrote:

> [...]

Slightly related: is xend/xm support going to be dropped for 4.4? If so, that could spare quite some testing resources, so perhaps it would be nice to know sooner rather than later?

-- 
Sander
On Thu, Aug 08, 2013 at 05:01:01PM +0100, George Dunlap wrote:
> [...]
> * HVM pci passthrough
>   @anthony

PV pci passthrough would be nice too.

You can assign me to it, though I would need some help in grokking the osstest thingy. Maybe I will just copy what @anthony is doing.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
On Thu, Aug 08, 2013 at 07:32:21PM +0200, Sander Eikelenboom wrote:
> Some ideas, perhaps only with a xen version after a push (so Xen shouldn't be directly to blame):
>
> * More current kernels:
>   - latest stable?

No. Too much to chase.

>   - linux-next? (although perhaps only just before the merge window, to avoid spending too much time breaking on other random kernel stuff)

No. Just tip/tip.git; that is the x86 folks' tip, which has caused headaches for us in the past.

>   - Linus's tree?

Yes.

>   - the xen kernel tree's next branch

Yes. That is xen/tip.git, which has the Xen generic, x86, and ARM subsystems all in one.

> * Some performance testing (network, block, cpu/mem/fork/real-apps benchmarks, some other metrics)
>   - perhaps make separate graphs of these, so one can see performance increases or decreases over a larger time frame, and see around which commits they occurred.

That would be nice.

>   - only after all basic tests succeeded and a push was done.
On gio, 2013-08-08 at 19:32 +0200, Sander Eikelenboom wrote:
> * Some performance testing (network, block, cpu/mem/fork/real-apps benchmarks, some other metrics)
>   - perhaps make separate graphs of these, so one can see performance increases or decreases over a larger time frame, and see around which commits they occurred.
>   - only after all basic tests succeeded and a push was done.

We are after this too. I am looking at how to make it possible to do something like that on top of OSSTest.

Perf benchmarking is a little bit different from regression smoke-testing, but I still think (hope? :-)) that most of the infrastructure can be reused.

Also, what hardware to use and how to properly schedule these kinds of "tests" is something that needs a bit more thinking/discussion, I think.

Anyway, although I'm not committing to having it all 100% ready for 4.4, George, feel free to add this to your tracking list and put my name on it.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
Friday, August 9, 2013, 1:19:54 PM, you wrote:

> We are after this too. I am looking at how to make it possible to do
> something like that on top of OSSTest.
>
> Perf benchmarking is a little bit different from regression
> smoke-testing, but I still think (hope? :-)) that most of the
> infrastructure can be reused.

Yes, it does require keeping the rest of the circumstances the same for a longer period of time. But it could be quite valuable; slowly-introduced and minor performance regressions are hard to discover.

> Also, what hardware to use and how to properly schedule these kinds of
> "tests" is something that needs a bit more thinking/discussion, I
> think.

Yes, since you have to keep the "environment" the same, the machine should only run the perf tests at that time.
On Fri, Aug 09, 2013 at 01:46:42PM +0200, Sander Eikelenboom wrote:
> Yes, since you have to keep the "environment" the same, the machine should only run the perf tests at that time.

This worries me. It is really hard to keep the "environment" the same. AIUI network / block etc. performance can be affected by other kernel subsystems. The kernel config could also have an impact on performance.

Wei.
Friday, August 9, 2013, 2:00:08 PM, you wrote:

> This worries me. It is really hard to keep the "environment" the same.
> AIUI network / block etc. performance can be affected by other kernel
> subsystems. The kernel config could also have an impact on
> performance.

True, though you should probably still be able to make out a general trend when putting it in graphs, and add a milestone whenever you significantly alter the config of a machine.
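Sander's graph-and-milestone idea can be made concrete: a slowly-introduced regression shows up when a recent window of benchmark results drifts below the longer-run baseline. A minimal sketch in Python -- the window size and threshold are arbitrary choices for illustration, not anything osstest implements:

```python
def trend_regression(samples, window=5, threshold=0.10):
    """Flag a performance regression when the mean of the most recent
    `window` samples falls more than `threshold` (a fraction) below the
    mean of all earlier samples.  Higher values = better (throughput)."""
    if len(samples) <= window:
        return False  # not enough history to establish a baseline
    baseline = samples[:-window]
    recent = samples[-window:]
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return recent_mean < base_mean * (1.0 - threshold)

# Throughput steady around 100, then drifting down roughly 15%:
history = [100, 101, 99, 100, 102, 100, 99, 88, 86, 85, 84, 85]
print(trend_regression(history))  # True
```

Plotting the baseline and recent means per push, with a milestone marker whenever the machine's configuration changes, would give exactly the long-time-frame view described above.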
On ven, 2013-08-09 at 14:04 +0200, Sander Eikelenboom wrote:
> True, though you should probably still be able to make out a general trend when putting it in graphs,
> and add a milestone whenever you significantly alter the config of a machine.

Indeed. However, let's not run too fast. The final goal is the one you stated in your first e-mail. The first step is to make it possible to run performance benchmarks on top of OSSTest. That way, even before adding these kinds of tests to automated testing, people could use it to investigate the performance impact of some change they're working on.

Let's figure out the details of whether and how to merge it with the push-gate machinery after we have the mechanism itself in place, ok? :-)

Sander, I can keep you posted on the progress I make, if you're interested.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
On Thu, Aug 8, 2013 at 12:01 PM, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> [...]
> * Host S3 suspend
>   @bguthro?

I would be happy to contribute to this effort, but I was unsuccessful in getting an osstest setup when I last tried. I think I would need to be paired with someone who can integrate code into the test harness.

Ben
On 08/08/13 18:37, Sander Eikelenboom wrote:
> Slightly related: is xend/xm support going to be dropped for 4.4?
> If so, that could spare quite some testing resources, so perhaps it would be nice to know sooner rather than later?

It's not really clear yet -- I think a proper discussion will happen if/when someone posts a patch to remove it. :-)

-George
On 08/08/13 20:40, Konrad Rzeszutek Wilk wrote:
> PV pci passthrough would be nice too.
>
> You can assign me to it, though I would need some help in grokking the
> osstest thingy. Maybe I will just copy what @anthony is doing.

I was thinking that would be a subset of the network driver domains -- but I'll add it on here, and whoever gets to it first can do it. :-)

-George
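For reference, both the HVM and PV passthrough cases boil down to exercising device assignment: making a device assignable on the host, then listing it in the guest config. A hedged sketch -- the BDF 0000:03:00.0 is an example, not a device from any real test box:

```
# Hypothetical PCI passthrough setup with xl (illustration only).
#
# On the host, first make the device assignable to guests:
#   xl pci-assignable-add 0000:03:00.0
#
# Then, in the guest config:
pci = [ '0000:03:00.0' ]
# For HVM guests, qemu mediates config-space accesses; for PV guests,
# pcifront in the guest kernel talks to pciback on the host directly,
# which is why the two cases need separate tests.
```

A test would then boot the guest and verify the device appears and its driver binds.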
On 09/08/13 13:00, Wei Liu wrote:
> This worries me. It is really hard to keep the "environment" the same.
> AIUI network / block etc. performance can be affected by other kernel
> subsystems. The kernel config could also have an impact on
> performance.

The same is true for functional regressions -- a bug could be introduced by Xen, qemu, or the kernel. The tester already keeps track of this, as far as I know, and only moves one thing at a time.

-George
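The "one thing at a time" property George mentions is what lets the bisector pin blame on a single commit. At its core this is ordinary binary search over a good-to-bad history; a toy sketch, where the `is_good` predicate stands in for a full osstest flight (the real thing is far more involved, and Bayesian precisely because test results are noisy rather than flipping cleanly once):

```python
def first_bad(commits, is_good):
    """Return the first bad commit, assuming commits[0] is good,
    commits[-1] is bad, and the history flips good->bad exactly once."""
    lo, hi = 0, len(commits) - 1  # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_good(commits[mid]):
            lo = mid
        else:
            hi = mid
    return commits[hi]

# Toy history: commits 0..9, regression introduced at commit 6.
print(first_bad(list(range(10)), lambda c: c < 6))  # 6
```

Once the culprit is found, e-mailing the "blame-worthy parties" (an item on George's list) is just a lookup of the author/committer of that one commit.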