Hi all,

Long-time lurker, first-time question. :)

I've been going back and forth trying to decide how much detail is enough in my controller specs. I feel that if I put too much detail in them, my tests become very brittle and don't allow me to refactor my code without significant effort on the tests. After all, I have tests so that I can refactor, right?

So, let me show two ways that I've done controller tests, and ask my enlightened colleagues on this list for insight to help me decide which way to go.

The first is very general coverage that describes the outcome of the behavior:

describe FeedsController, 'get /feeds/' do
  before(:each) do
    @request.env["HTTP_ACCEPT"] = "application/xml"
    Model.should_receive(:find).with(any_args).and_return mock_model(Model)
  end

  it "should return success" do
    get '/'
    response.should be_success
  end

  it "should return 405 (Method Not Allowed) if HTTP_ACCEPT is text/html" do
    @request.env["HTTP_ACCEPT"] = "text/html"
    get '/'
    response.response_code.should == 405
  end
end

The second one is much more detailed:

describe FeedsController, 'get /feeds/' do
  before(:each) do
    @model = mock_model(Model)
    @request.env["HTTP_ACCEPT"] = "application/xml"
    Model.should_receive(:find).with(any_args).and_return @model
  end

  it "should assign to the model" do
    get '/'
    assigns[:model].should == @model
  end

  it "should render feed template" do
    get '/'
    response.should render_template('feeds/model_feed.xml.erb')
  end
end

Obviously, both are very basic in their implementation, but still, I ask: if you were writing the specs, which way would you write them?

Thanks for any guidance.

--
Matt Berther
http://www.mattberther.com
Hey Matt,

The ultimate test would be one that is focused on one thing, such that the test would:
- break every time that thing broke
- break only when that thing broke
- give detailed feedback enabling you to focus on the subpart of the thing necessary to identify and fix the problem

The ultimate test suite would be the set of such tests that covered every single concern that exists in a project.

Some of these concerns are easier to test than others. Some are important and lend themselves to automation, and thus enjoy great tool support. Others involve something far more abstract, like a person's aesthetic sense.

When considering a new test, I ask myself what problem the test solves, and what problem I really want the test to solve. I try to write the test in terms of the second. For example, if I were to write the following test:

it "should allow deposits and withdrawals" do
  @account.deposit 80
  @account.withdraw 25
  @account.balance.should == 55
end

The test would be valuable when the problem is that you need to track how much money people have in their accounts.

If you faced a problem such as "Make sure the user sees an error when they withdraw more than their balance," you would not want a test like:

it "should have an overdraft error" do
  post :withdrawals, :amount => 1_000_000_000_000 # even Bill can't do this!
  assigns[:withdrawal].should have_at_least(1).error_on(:amount)
end

If this test breaks at some point, we would make a change to the test or production code in order to make it pass. We wouldn't think of the problem it was intended to solve, though, because we would be absorbed in the problem that initiated the break-inducing change. The danger is that the test appears to be robustly covering something useful, but in effect can let problems leak through. Imagine that we remove an important partial from a view. This test, whose ultimate goal was to ensure that users see an error message, would fail to catch the problem that the message wasn't displayed at all.

So you should write tests expressing the same level of abstraction as the problem you want them to solve. Somewhere you would need a test like:

it "should have an overdraft error" do
  @page.should include("Can't withdraw more than your balance")
end

Now you would need some way to get @page. Perhaps you make a request to a controller, it hits a database, renders a response, and assigns the response body to @page.
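One way to get @page in rspec-rails is a controller example with integrate_views turned on, so the real templates render and the expectation stays at the level of what the user sees. A rough, untested sketch; WithdrawalsController, the :create action, and the exact message are inventions of mine, not code from your app:

describe WithdrawalsController, "overdrawing an account" do
  integrate_views  # render the real templates instead of stubbing them out

  it "should have an overdraft error" do
    post :create, :withdrawal => { :amount => 1_000_000_000_000 }
    @page = response.body
    @page.should include("Can't withdraw more than your balance")
  end
end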
Or maybe you rendered a view using a fake @withdrawal, and you've got another test that verifies that you assign @withdrawal in the controller, and another test that verifies that Withdrawal objects get an error when you try to create them for an amount greater than the target account balance. And you've got a test that you label an Integration test, to signal the fact that it integrates all of these pieces.

You should only write integration tests that check across valuable boundaries. This does not restrict it to stuff like company-specific code using an ORM framework, though. Because you should only write tests that are valuable, any set of layered tests forms a small subsystem requiring integration testing.

It is sometimes useful to have layers of tests that enable you to localize problems. Other times the types of problems you solve will be trivial or obvious and won't require localization.

As a simple rule, more tests == more overhead. But if you're missing certain tests, then you will not notice certain problems when they appear. The art of all of this is identifying the set of tests that maximizes your confidence and ability to produce valuable software.

With all that theory out of the way, what can we say about the tests you presented?

> describe FeedsController, 'get /feeds/' do
>   before(:each) do
>     @request.env["HTTP_ACCEPT"] = "application/xml"
>     Model.should_receive(:find).with(any_args).and_return mock_model(Model)
>   end
>
>   it "should return success" do
>     get '/'
>     response.should be_success
>   end
>
>   it "should return 405 (Method Not Allowed) if HTTP_ACCEPT is text/html" do
>     @request.env["HTTP_ACCEPT"] = "text/html"
>     get '/'
>     response.response_code.should == 405
>   end
> end

This test would be good in a situation where we had published an API stating that only XML requests were allowed.

> The second one is much more detailed:
>
> describe FeedsController, 'get /feeds/' do
>   before(:each) do
>     @model = mock_model(Model)
>     @request.env["HTTP_ACCEPT"] = "application/xml"
>     Model.should_receive(:find).with(any_args).and_return @model
>   end
>
>   it "should assign to the model" do
>     get '/'
>     assigns[:model].should == @model
>   end
>
>   it "should render feed template" do
>     get '/'
>     response.should render_template('feeds/model_feed.xml.erb')
>   end
> end

This test would be valuable in a context where the XML feed output is complex. In that case, testing the output directly might not sufficiently enable us to localize issues.

If you could write tests that examine the response body without reducing the clarity of the example group, you should do so. Fewer tests == less overhead.
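For example, with integrate_views switched on and the mock stubbed with whatever the template actually reads (I'm guessing at a :title attribute here; substitute your real ones), you could check the body directly:

describe FeedsController, 'get /feeds/ with views rendered' do
  integrate_views  # actually render feeds/model_feed.xml.erb

  before(:each) do
    # :title is a stand-in for whatever attributes the template really uses
    @model = mock_model(Model, :title => "Some entry")
    @request.env["HTTP_ACCEPT"] = "application/xml"
    Model.should_receive(:find).with(any_args).and_return @model
  end

  it "should include the model in the feed body" do
    get '/'
    response.body.should include("Some entry")
  end
end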
> Obviously, both are very basic in their implementation, but still, I
> ask... If you were writing the specs, which way would you write them?
> Thanks for any guidance.

I hope that, despite the typical "it-all-depends-on-context" answer, I was able to give you some insight into identifying and analyzing possible contexts.

Pat

Very nice reply Pat. This would make a great blog post if you get a chance.

Thanks
Chris

On 18 Apr 2008, at 10:15, Pat Maddox wrote:

> [snip]
It seems like I'm constantly making long-winded replies that would be better off in a blog post or in a book.

Pat

On Fri, Apr 18, 2008 at 2:44 AM, Chris Parsons <chris at edendevelopment.co.uk> wrote:
> Very nice reply Pat. This would make a great blog post if you get a chance.
>
> Thanks
> Chris
>
> [snip]
On 18 Apr 2008, at 10:44, Chris Parsons wrote:
> Very nice reply Pat. This would make a great blog post if you get a chance.

+1

I especially like the line "The art of all of this is identifying the set of tests that maximizes your confidence and ability to produce valuable software."

Ashley

--
http://www.patchspace.co.uk/
http://aviewfromafar.net/
Hi Pat,

Wow. Thank you for a very nice response. As others have said, this would be a stellar blog post.

As you stated, it all depends, but the theory that you provided will help me determine what exactly I am trying to solve with a particular test. I am definitely saving this email, and I am sure I'm going to be reading it a few times.

Thank you again for your thoughts. They really did help me answer some of the questions that have been bothering me, and they helped me look at testing differently, most specifically in that there is no "silver bullet" pattern for writing tests.

On Fri, Apr 18, 2008 at 3:15 AM, Pat Maddox <pergesu at gmail.com> wrote:
> [snip]
--
Matt Berther
http://www.mattberther.com