hypothetical question for all you BDD experts: I want to make sure that a :list action always returns widgets in alphabetical order. There are at least two ways of doing this:

it "should fetch items in alphabetical order" do
  Widget.should_receive(:find).with(:order => "name ASC")
  get :list
end

it "should fetch items in alphabetical order" do
  [:red, :green, :blue].each {|x| Widget.create(:name => x) }
  get :list
  assigns[:widgets].first.name.should == 'blue'
  assigns[:widgets].last.name.should == 'red'
end

with the first method, I get to mock the important calls and stub the rest, but the example is very closely tied to the implementation. If I change from Widget.find to Widget.find_alphabetically then the example breaks (assuming find_alphabetically() doesn't use AR::Base.find)

with the second method, I'm testing the behaviour more than how it's implemented. I don't care what the action does as long as it gives me an array of widgets sorted alphabetically, but I spend more time setting things up and worrying about model validations. In addition, the specs are tied to a db.

so which is the better method, and is there another way I haven't considered that gives me the best of both worlds?
On 8/24/07, David Green <justnothing at tiscali.co.uk> wrote:
> so which is the better method, and is there another way I haven't considered
> that gives me the best of both worlds?

It depends on how high you have your magnifying glass set. Really!

Here's how I'd get there:

In an integration test, which I use as ... well ... integration tests (i.e. pretty close to end to end - just no browser, so the javascript can't get tested), I'd have something akin to the second example, except that the creates would be done through a controller. This would be in place before I ever started working on individual objects.

Then I'd develop the view, followed by the controller, followed by the model. Typically, in my experience, that would result in something like (not executing these so please pardon any potential bugs):

describe "/widgets/index" do
  it "should display a list of widgets" do
    assigns[:widgets] = [
      mock_model(Widget, :name => 'foo'),
      mock_model(Widget, :name => 'bar')
    ]
    render '/widgets/index'
    response.should have_tag('ul') do
      with_tag('li', 'foo')
      with_tag('li', 'bar')
    end
  end
end

describe WidgetController, 'responding to GET /widgets' do
  it "should assign a list of widgets" do
    Widget.should_receive(:find_alphabetically).and_return(list = [])
    get :index
    assigns[:widgets].should == []
  end
end

describe Widget, "class" do
  it "should provide a list of widgets sorted alphabetically" do
    Widget.should_receive(:find).with(:order => "name ASC")
    Widget.find_alphabetically
  end
end

You're correct that the refactoring requires you to change the object-level examples, and that is something that would be nice to avoid. But also keep in mind that in java and C# people refactor things like that all the time without batting an eye, because the tools make it a one-step activity. Refactoring is changing the design of your *system* without changing its behaviour. That doesn't really fly all the way down to the object level 100% of the time.

WDYT?

David
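For reference, the end-to-end integration test David alludes to here (but didn't post) might look roughly like the sketch below. This is only an illustration: the routes, the create-through-a-controller step, and the assertion against the rendered page are assumptions, not code from the thread.

require File.dirname(__FILE__) + '/../test_helper'

class WidgetsListingTest < ActionController::IntegrationTest
  def test_widgets_are_listed_alphabetically
    # create the widgets through a controller, not directly via the model
    [:red, :green, :blue].each do |name|
      post '/widgets/create', :widget => { :name => name.to_s }
    end

    get '/widgets/list'
    assert_response :success

    # the rendered page should list blue before green before red
    body = response.body
    assert body.index('blue') < body.index('green')
    assert body.index('green') < body.index('red')
  end
end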
On 8/24/07, David Green <justnothing at tiscali.co.uk> wrote:
> so which is the better method, and is there another way I haven't considered
> that gives me the best of both worlds?

In your controller spec have something like:

@widgets = mock("widgets")
Widget.should_receive(:find_alphabetically).and_return(@widgets)
get :index

You don't care in your controller test what actually gets returned, just that it's calling the right method on the model. Now in your "Widget" spec have one that looks like:

describe Widget, "#find_alphabetically" do
  before do
    Widget.destroy_all
    # create some widgets for your test, say widgets C, A, B in that order
    @results = Widget.find_alphabetically
  end

  it "has widget A as the first widget" do
    # ...
  end

  it "has widget B as the second widget" do
    # ...
  end

  it "has widget C as the third widget" do
    # ...
  end
end

HTH,
Zach
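Filled in, the pending examples in Zach's skeleton might read like this sketch; the widget names and create! calls are illustrative assumptions:

describe Widget, "#find_alphabetically" do
  before do
    Widget.destroy_all
    # create widgets out of order so the sort actually has work to do
    ["C", "A", "B"].each { |name| Widget.create!(:name => name) }
    @results = Widget.find_alphabetically
  end

  it "has widget A as the first widget" do
    @results.first.name.should == "A"
  end

  it "has widget B as the second widget" do
    @results[1].name.should == "B"
  end

  it "has widget C as the third widget" do
    @results.last.name.should == "C"
  end
end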
On 8/24/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> describe Widget, "class" do
>   it "should provide a list of widgets sorted alphabetically" do
>     Widget.should_receive(:find).with(:order => "name ASC")
>     Widget.find_alphabetically
>   end
> end
>
> [...]
>
> WDYT?

I think that example is fine up until the model spec. The find_alphabetically example should hit the db, imo. With the current spec there's no way to know whether find_alphabetically actually works or not. You're relying on knowledge of ActiveRecord here, trusting that the arguments to find are correct.

What I've found when I write specs is that I discover new layers of services until eventually I get to a layer that actually does something. When I get there, it's important to have specs that describe what it does, not how it does it. In the case of find_alphabetically we care that it returns the items in alphabetical order. Not that it makes a certain call to the db.

Pat
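To make Pat's point concrete: either of the following (assumed, illustrative) implementations satisfies a behaviour-focused spec that checks the returned order, while the mock-based spec above only passes for the first.

class Widget < ActiveRecord::Base
  # sorts in the database; this is what the mocked spec pins down
  def self.find_alphabetically
    find(:all, :order => "name ASC")
  end

  # an equivalent-for-callers alternative that sorts in Ruby; a spec
  # mocking Widget.find(:order => "name ASC") would fail against this,
  # while a spec checking the returned order would still pass:
  # def self.find_alphabetically
  #   find(:all).sort_by { |w| w.name }
  # end
end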
On 8/24/07, Pat Maddox <pergesu at gmail.com> wrote:
> I think that example is fine up until the model spec. The
> find_alphabetically example should hit the db, imo. With the current
> spec there's no way to know whether find_alphabetically actually works
> or not. You're relying on knowledge of ActiveRecord here, trusting
> that the arguments to find are correct.

Au contraire! This all starts with an Integration Test. I didn't post the code but I did mention it.

> What I've found when I write specs is that I discover new layers of
> services until eventually I get to a layer that actually does
> something. When I get there, it's important to have specs that
> describe what it does, not how it does it. In the case of
> find_alphabetically we care that it returns the items in alphabetical
> order. Not that it makes a certain call to the db.

I play this both ways and haven't come to a preference, but I'm leaning towards blocking database access from the rspec examples and only allowing it in my end to end tests (using Rails Integration Tests or - soon - RSpec's new Story Runner).
On 8/24/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> Au contraire! This all starts with an Integration Test. I didn't post
> the code but I did mention it.

You're absolutely right, there should be an integration or acceptance test that exercises the behavior. I would question then whether or not the example for find_alphabetically is (a) pulling its weight or (b) too brittle.

(a) What value does the example provide? It doesn't serve to document how find_alphabetically is used (usage doco is provided by good naming, and secondarily by the controller specs). It doesn't give you any information that you couldn't get by looking at the implementation, because it duplicates the implementation exactly. So the only real benefits of it are that you can see it when you visually scan the specs, and it shows up in the output when you generate spec docs.

Those are real benefits, of course, which leads me to believe that the spec is just a bit brittle. Knowing what exact arguments are passed to Widget.find doesn't add any value. It makes the test more cluttered and brittle. All we really care about is that a find is performed. In that case, perhaps the example should be simply

it "should provide a list of widgets sorted alphabetically" do
  Widget.should_receive(:find)
  Widget.find_alphabetically
end

WDYT?

> I play this both ways and haven't come to a preference, but I'm
> leaning towards blocking database access from the rspec examples and
> only allowing it in my end to end tests (using Rails Integration Tests or
> - soon - RSpec's new Story Runner).

Will Story Runner give us all the same abilities as Rails ITs, obviating the need for test::unit altogether?

Pat
On 8/24/07, Pat Maddox <pergesu at gmail.com> wrote:
> Knowing what exact arguments are passed to Widget.find doesn't add any
> value. It makes the test more cluttered and brittle. All we really
> care about is that a find is performed. In that case, perhaps the
> example should be simply
>
> it "should provide a list of widgets sorted alphabetically" do
>   Widget.should_receive(:find)
>   Widget.find_alphabetically
> end
>
> WDYT?

The problem w/ that, for me, is that if I change that method for any reason I won't know if I broke anything until I run the integration tests. I'll trade off a bit of brittleness for rapid feedback. Not always - but usually.

> Will Story Runner give us all the same abilities as Rails ITs,
> obviating the need for test::unit altogether?

Yes. Just need to figure out how to wrap IT w/ Story Runner.

Cheers,
David
David Chelimsky-2 wrote:
> It depends on how high you have your magnifying glass set. Really!
> [...]
> WDYT?
>
> David

after reading your post yesterday, I dug out some old specs that were doing some really complex setup using real objects, and rewrote them to exclusively use mocks and stubs. The specs run around 20% quicker, but more importantly, the code is much less complex and much easier to work with! it's a relief not having to worry about model behaviour in controller specs. so much so that I ended up adding around 50% more examples and catching some bugs which I'd missed.

I wasn't testing my views at all, instead relying on integrate_views to catch any problems. This time round I wrote view specs, which is a little more work but testing only one MVC aspect in isolation really makes things simpler. I realise now that the old way, I was using controller specs to test integration rather than controllers.

I'm relatively new to programming, and it's all self taught so I can't speak with authority, but the more I use BDD, the more I like it. It just makes sense.

thanks for your help

p.s. when is the book coming? :)
On Sat, 2007-08-25 at 00:01 -0700, David Green wrote:
> I wasn't testing my views at all, instead relying on integrate_views to
> catch any problems. This time round I wrote view specs, which is a little
> more work but testing only one MVC aspect in isolation really makes things
> simpler. I realise now that the old way, I was using controller specs to
> test integration rather than controllers.

I'm actually doing a bit of both. I write all my controller specs without integrate_views, with separate specs for the views. On top of that I'm also including very simple specs for each action in the controller, including views and relying on fixtures, mostly like this:

it 'should be a valid page' do
  get :index
  response.should be_xhtml
end

Even though all the behaviour is tested without views and fixtures, this additional check helps to find problems in the interaction between views, controller, and model, and it is the only way I know to validate the pages as XHTML.

Kind regards,
Hans
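be_xhtml is not a matcher RSpec ships with, so presumably Hans has a custom matcher behind it. A minimal sketch of such a matcher, assuming only a well-formedness check via REXML rather than true validation against the XHTML DTD, could look like:

require 'rexml/document'

# Backs `response.should be_xhtml`. This is an assumed implementation:
# it only checks that the body parses as well-formed XML; real XHTML
# validation would need an external validator.
class BeXhtml
  def matches?(response)
    @body = response.body
    REXML::Document.new(@body)
    true
  rescue REXML::ParseException => e
    @error = e.message
    false
  end

  def failure_message
    "expected well-formed XHTML, but parsing failed: #{@error}"
  end
end

def be_xhtml
  BeXhtml.new
end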
On 8/24/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> I play this both ways and haven't come to a preference, but I'm
> leaning towards blocking database access from the rspec examples and
> only allowing it in my end to end tests (using Rails Integration Tests or
> - soon - RSpec's new Story Runner).

Now that I've had a chance to play with Story Runner, I want to revisit this topic a bit.

Let's say in your example you wanted to refactor find_alphabetically to use enumerable's sort_by to do the sorting.

def self.find_alphabetically
  find(:all).sort_by {|w| w.name }
end

Your model spec will fail, but your integration test will still pass.

I've been thinking about this situation a lot over the last few months. It's been entirely theoretical because I haven't had a suite of integration tests ;) Most XP advocates lean heavily on unit tests when doing refactoring. Mocking tends to get in the way of refactoring though. In the example above, we rely on the integration test to give us confidence while refactoring. In fact I would ignore the unit test (model-level spec) altogether, and rewrite it when the refactoring is complete.

Here's how I reconcile this with traditional XP unit testing. First of all our integration tests are relatively lightweight. In a web app, a user story consists of making a request and verifying the response. Authentication included, you'll be making at most 3-5 HTTP requests per test. This means that our integration tests still run in just a few seconds. Integration tests in a Rails app are a completely different beast from the integration tests in the Chrysler payroll app that Beck, Jeffries, et al worked on.

The second point of reconciliation is that mock objects and refactoring are two distinct tools you use to design your code. When I'm writing greenfield code I'll use mocks to drive the design. When I refactor though, I'm following known steps to improve the design of my existing code. The vast majority of the time I will perform a known refactoring, which means I know the steps and the resulting design. In this situation I'll ignore my model specs because they'll blow up, giving me no information other than I changed the design of my code. I can use the integration tests to ensure that I haven't broken any behavior. At this point I would edit the model specs to use the correct mock calls.

As I mentioned, this has been something that's been on my mind for a while. I find mock objects to be very useful, but they seem to clash with most of the existing TDD and XP literature. To summarize, here are the points where I think they clash:

* Classical TDD relies on unit tests for confidence in refactoring. BDD relies on integration tests
* XP acceptance tests are customer tests, whereas RSpec User Stories are programmer tests. They can serve a dual-purpose because you can easily show them to a customer, but they're programmer tests in the sense that the programmer writes and is responsible for those particular tests.

In the end it boils down to getting stuff done. After a bit of experimentation I'm thinking that the process of

1. Write a user story
2. Write detailed specs using mocks to drive design
3. Refactor, using stories to ensure that expected behavior is maintained, ignoring detailed specs
4. Retrofit specs with correct mock expectations

is a solid approach. I'd like others to weigh in with their thoughts.

Pat
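For readers who haven't seen Story Runner yet, step 1 of Pat's process for this thread's example might look roughly like the sketch below, using the early all-in-Ruby story API. The narrative wording, route, and assertions are assumptions for illustration:

# A rough sketch of a Story Runner story for the alphabetical listing;
# :type => RailsStory runs it as a Rails integration session.
Story "listing widgets alphabetically", %(
  As a widget browser
  I want widgets listed in alphabetical order
  So that I can find a widget by name quickly
), :type => RailsStory do

  Scenario "widgets created out of order" do
    Given "widgets named red, green and blue" do
      [:red, :green, :blue].each { |name| Widget.create!(:name => name.to_s) }
    end

    When "I request the widget list" do
      get "/widgets/list"
    end

    Then "I see the widgets in alphabetical order" do
      # blue should appear before green, which should appear before red
      response.body.index("blue").should < response.body.index("green")
      response.body.index("green").should < response.body.index("red")
    end
  end
end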
On 9/2/07, Pat Maddox <pergesu at gmail.com> wrote:
> In the end it boils down to getting stuff done. After a bit of
> experimentation I'm thinking that the process of
> 1. Write a user story
> 2. Write detailed specs using mocks to drive design
> 3. Refactor, using stories to ensure that expected behavior is
>    maintained, ignoring detailed specs
> 4. Retrofit specs with correct mock expectations
> is a solid approach. I'd like others to weigh in with their thoughts.

Hey Pat,

I really appreciate that you're thinking about and sharing this, as it's something that weighs on a lot of people's minds and it's clear that you have some understanding of the XP context in which all of this was born.

That said, I see this quite a bit differently.

I don't think this has anything to do w/ TDD vs BDD. "Mock Objects" is not a BDD concept. It just feels that way because we talk more about interaction testing, but interaction testing predates BDD by some years. The problem we experience with mocks relates to the fact that we've chosen to live in the beautiful, free, dynamically typed and POORLY TOOLED land of Ruby. When Ruby refactoring tools catch up with those of java and .NET, this pain will all go away.

For example - if I'm in IntelliJ in a java project and I have a method like this:

model.getName()

and I'm using jmock (the old version), which uses Strings for method names:

model.expects(once()).method("getName").will(returnValue("stub value"))

and I do a Rename Method refactoring on getName(), IntelliJ will ask me if I want to change the strings it finds that match getName as well as the method invocations.

In Ruby, we do this now w/ search and replace. Not quite as elegant. But under the hood, that's all IntelliJ is doing. It just makes it feel like an integrated step of an automated refactoring.

re: Story Runner. The intent of Story Runner is exactly the same as tools like FIT, etc, that are typically found in the Acceptance Testing space in XP projects.
In my experience using FitNesse, it was rare that a customer actually added new tests to a suite. If there were testing folks on board, they would do it (and they would likely be equipped to do it in Story Runner as well), but if not, then the FitNesse tests were at best the result of a collaborative session with the customer and, at worst, our (developers') interpretation of conversations we had had with the customer.

I see Story Runner fitting in exactly like that in the short run. I can also see external DSLs emerging that let customers actually write the outputs that Story Runner should produce and run that through a process that writes what we're writing now in Story Runner. But that's probably some time off.

I totally agree with your last statement that "it boils down to getting stuff done." And your approach seems to be the approach that I take, given the tools that we have. But I really think it's about tools and not process. And I think that BDD is a lot more like what really experienced TDD'ers do out of the gate. We're just choosing different words and structures to make it easier to communicate across roles on a team (customer, developer, tester, etc).

FWIW.

Cheers,
David
On 9/2/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> I don't think this has anything to do w/ TDD vs BDD. "Mock Objects" is
> not a BDD concept. It just feels that way because we talk more about
> interaction testing, but interaction testing predates BDD by some
> years.

Hi David,

Thanks so much for your thoughtful reply.

You're right, and I didn't mean to suggest that mock objects were a BDD concept at all. However it seems to me that BDDers embrace mock objects as a very useful design tool, whereas classical TDDers would use them sparsely, when a resource is expensive or difficult to use directly. For example, Beck talks about mocking a database in his book, and that's that. Astels demonstrates mocking the roll of a die. He does briefly use mocks before he's ready to implement the GUI part of the app.

Those are the two TDD books with which I'm most familiar. I'm sure a lot has changed in the TDD community since then, and indeed you can see that Astels' mentality has changed somewhat. His "one assertion per test" article [1] parses an address and then verifies it by asserting the getters. His remake, "one expectation per example" [2], is a bit different in that he passes a mocked builder in and uses that to verify that the parsing code works, exposing no getters at all. That to me signifies a fundamental shift in TDD thought. Instead of thinking about objects in isolation and what services they provide, we think of the services an object provides and how it interacts with other objects and uses their services.

I'm certain that it's not a new way of thinking, but hopefully you can see why I'd believe it's probably not mainstream.

There's one other roadblock to my thinking, and it results from using RSpec almost exclusively within Rails projects. I think it's obvious why you mock models when writing view and controller specs. However it's less obvious to me why you'd mock associations in model specs, and I think it has to do with the fact that AR couples business and persistence logic.

If we just had domain objects that never hit a database, then we might initially mock interactions but then use concrete instances when we later implemented those classes. When I think of Beck's Money example, or Martin Fowler's video rental list in Refactoring, it seems silly to me to use mocks in those cases. Perhaps you might at the very beginning, but you'd sub real objects in as you implemented them. We don't do this with AR because they're simply too heavy.

This culminates in another general idea I've had, which is to mock services in a lower layer, and use concrete instances for objects in the same layer when possible. If we were to split AR into domain objects and a data access layer, the domain objects would mock calls to the data access layer but use concrete domain objects in the tests. The unit tests remain fast and simple, and mocks no longer get in the way of refactoring. (A sketch of this layering follows below.) Of course then you're writing integration tests at a fairly low level I guess, but that's 100% acceptable to me in the interest of getting stuff done rather than being dogmatic.

> The problem we experience with mocks relates to the fact that
> we've chosen to live in the beautiful, free, dynamically typed and
> POORLY TOOLED land of Ruby. When Ruby refactoring tools catch up with
> those of java and .NET, this pain will all go away.

Agreed. I guess for me it's easier to get the production code right and then fix the tests after the fact. I'd hate to do all the work of changing the production and test code and then find out it was incorrect. Fixing tests after fixing the production code amounts to the same work as doing it all in one step, because as you mentioned it's essentially a manual process.

> I totally agree with your last statement that "it boils down to
> getting stuff done." And your approach seems to be the approach that I
> take, given the tools that we have. But I really think it's about tools
> and not process.

So "ideally," who would write Story Runner stories? I put it in quotes because I think it would differ greatly depending on the work environment, what kind of level of interaction you have with the customer, etc. Using TDD terms, would we consider SR stories to be Customer or Developer tests? I gather from your insight that they're Customer tests.

Finally I agree 100% on not focusing on process. I'm trying to figure out the most effective process given the tools currently available, and will be constantly changing it as more/better tools come along. Although I suppose what I should really be spending my energy on is building the tools that will make all our lives better ;)

Pat
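A sketch of the layering Pat proposes, with an assumed WidgetGateway standing in for the data access layer; none of these names come from ActiveRecord or from elsewhere in the thread:

# A plain domain object that delegates persistence to a lower-level
# gateway. WidgetGateway and its all_widgets method are assumptions.
class WidgetGateway
  def all_widgets
    raise NotImplementedError, "a real data access layer would go here"
  end
end

class Widget
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def self.find_alphabetically(gateway = WidgetGateway.new)
    gateway.all_widgets.sort_by { |w| w.name }
  end
end

# The spec mocks only the lower layer, and uses concrete domain
# objects in the same layer:
describe Widget, ".find_alphabetically" do
  it "returns widgets sorted by name" do
    gateway = mock("gateway")
    gateway.should_receive(:all_widgets).and_return(
      [Widget.new("red"), Widget.new("green"), Widget.new("blue")])
    names = Widget.find_alphabetically(gateway).map { |w| w.name }
    names.should == ["blue", "green", "red"]
  end
end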
Rodrigo Alvarez Fernández (2007-Sep-02 10:28 UTC) wrote:
I have an article about this in my blog (http://papipo.blogspot.com/2007/08/bdd-isolation-integration.html), with a controller spec example, testing just behaviour and not the code. Please comment on it and tell me what you think. I guess that with the new Story Runner, this approach will be even better.

Thanks in advance.

--
http://papipo.blogspot.com
On 9/2/07, Rodrigo Alvarez Fernández <papipo at gmail.com> wrote:
> I have an article about this in my blog, with a controller spec example,
> testing just behaviour and not the code. Please comment on it and tell me
> what you think.

I added comments on the blog: http://papipo.blogspot.com/2007/08/bdd-isolation-integration.html

> I guess that with the new Story Runner, this approach will be
> even better.

Be careful here - Story Runner is not really intended to solve these lower level problems. Not that it can't be used for that, but it's a lot more heavyweight and better suited for high level scenarios that exercise the code end to end (including the DB).

Cheers,
David
Rodrigo Alvarez Fernández (2007-Sep-02 15:18 UTC) wrote:
On 9/2/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> Be careful here - Story Runner is not really intended to solve these
> lower level problems. Not that it can't be used for that, but it's a
> lot more heavyweight and better suited for high level scenarios that
> exercise the code end to end (including the DB).

Yes, I meant as integration tests.

--
http://papipo.blogspot.com
On 9/2/07, Pat Maddox <pergesu at gmail.com> wrote:> On 9/2/07, David Chelimsky <dchelimsky at gmail.com> wrote: > > On 9/2/07, Pat Maddox <pergesu at gmail.com> wrote: > > > On 8/24/07, David Chelimsky <dchelimsky at gmail.com> wrote: > > > > On 8/24/07, Pat Maddox <pergesu at gmail.com> wrote: > > > > > On 8/24/07, David Chelimsky <dchelimsky at gmail.com> wrote: > > > > > > describe Widget, "class" do > > > > > > it "should provide a list of widgets sorted alphabetically" do > > > > > > Widget.should_receive(:find).with(:order => "name ASC") > > > > > > Widget.find_alphabetically > > > > > > end > > > > > > end > > > > > > > > > > > > You''re correct that the refactoring requires you to change the > > > > > > object-level examples, and that is something that would be nice to > > > > > > avoid. But also keep in mind that in java and C# people refactor > > > > > > things like that all the time without batting an eye, because the > > > > > > tools make it a one-step activity. Refactoring is changing the design > > > > > > of your *system* without changing its behaviour. That doesn''t really > > > > > > fly all the way down to the object level 100% of the time. > > > > > > > > > > > > WDYT? > > > > > > > > > > I think that example is fine up until the model spec. The > > > > > find_alphabetically example should hit the db, imo. With the current > > > > > spec there''s no way to know whether find_alphabetically actually works > > > > > or not. You''re relying on knowledge of ActiveRecord here, trusting > > > > > that the arguments to find are correct. > > > > > > > > Au contrare! This all starts with an Integration Test. I didn''t post > > > > the code but I did mention it. > > > > > > > > > What I''ve found when I write specs is that I discover new layers of > > > > > services until eventually I get to a layer that actually does > > > > > something. When I get there, it''s important to have specs that > > > > > describe what it does, not how it does it. In the case of > > > > > find_alphabetically we care that it returns the items in alphabetical > > > > > order. Not that it makes a certain call to the db. > > > > > > > > I play this both ways and haven''t come to a preference, but I''m > > > > leaning towards blocking database access from the rspec examples and > > > > only allowing it my end to end tests (using Rails Integration Tests or > > > > - soon - RSpec''s new Story Runner). > > > > > > Now that I''ve had a chance to play with Story Runner, I want to > > > revisit this topic a bit. > > > > > > Let''s say in your example you wanted to refactor find_alphabetically > > > to use enumerable''s sort_by to do the sorting. > > > > > > def self.find_alphabetically > > > find(:all).sort_by {|w| w.name } > > > end > > > > > > Your model spec will fail, but your integration test will still pass. > > > > > > I''ve been thinking about this situation a lot over the last few > > > months. It''s been entirely theoretical because I haven''t had a suite > > > of integration tests ;) Most XP advocates lean heavily on unit tests > > > when doing refactoring. Mocking tends to get in the way of > > > refactoring though. In the example above, we rely on the integration > > > test to give us confidence while refactoring. In fact I would ignore > > > the unit test (model-level spec) altogether, and rewrite it when the > > > refactoring is complete. > > > > > > Here''s how I reconcile this with traditional XP unit testing. First > > > of all our integration tests are relatively light weight. 
In a web > > > app, a user story consists of making a request and verifying the > > > response. Authentication included, you''ll be making at most 3-5 HTTP > > > requests per test. This means that our integration tests still run in > > > just a few seconds. Integration tests in a Rails app are a completely > > > different beast from the integration tests in the Chrysler payroll app > > > that Beck, Jeffries, et al worked on. > > > > > > The second point of reconciliation is that mock objects and > > > refactoring are two distinct tools you use to design your code. When > > > I''m writing greenfield code I''ll use mocks to drive the design. When > > > I refactor though, I''m following known steps to improve the design of > > > my existing code. The vast majority of the time I will perform a > > > known refactoring, which means I know the steps and the resulting > > > design. In this situation I''ll ignore my model specs because they''ll > > > blow up, giving me no information other than I changed the design of > > > my code. I can use the integration tests to ensure that I haven''t > > > broken any behavior. At this point I would edit the model specs to > > > use the correct mock calls. > > > > > > As I mentioned, this has been something that''s been on my mind for a > > > while. I find mock objects to be very useful, but they seem to clash > > > with most of the existing TDD and XP literature. To summarize, here > > > are the points where I think they clash: > > > > > > * Classical TDD relies on unit tests for confidence in refactoring. > > > BDD relies on integration tests > > > * XP acceptance tests are customer tests, whereas RSpec User Stories > > > are programmer tests. They can serve a dual-purpose because you can > > > easily show them to a customer, but they''re programmer tests in the > > > sense that the programmer writes and is responsible for those > > > particular tests. > > > > > > In the end it boils down to getting stuff done. After a bit of > > > experimentation I''m thinking that the process of > > > 1. Write a user story > > > 2. Write detailed specs using mocks to drive design > > > 3. Refactor, using stories to ensure that expected behavior is > > > maintained, ignoring detailed specs > > > 4. Retrofit specs with correct mock expectations > > > > > > is a solid approach. I''d like others to weigh in with their thoughts. > > > > Hey Pat, > > > > I really appreciate that you''re thinking about and sharing this as its > > something that weighs on a lot of people''s minds and it''s clear that > > you have some understanding of the XP context in which all of this was > > born. > > > > That said, I see this quite a bit differently. > > > > I don''t think this has anything to do w/ TDD vs BDD. "Mock Objects" is > > not a BDD concept. It just feels that way because we talk more about > > interaction testing, but interaction testing predates BDD by some > > years. > > Hi David, > > Thanks so much for your thoughtful reply.Thanks for your thought provoking post!> You''re right, and I didn''t mean to suggest that mock objects were a > BDD concept at all. 
However it seems to me that BDDers embrace mock > objects as a very useful design tool, whereas classical TDDers would > use them sparsely, when a resource is expensive or difficult to use > directly.This is true to some extent, but the mock objects paper, which introduced the idea of mocks-as-design-tool (http://mockobjects.com/files/mockrolesnotobjects.pdf) was presented at OOPSLA 04, and the thinking that it came from had already been evolving.> For example, Beck talks about mocking a database in his > book, and that''s that. Astels demonstrates mocking the roll of a die. > He does briefly use mocks before he''s ready to implement the GUI part > of the app. > > Those are the two TDD books with which I''m most familiar. I''m sure a > lot has changed in the TDD community since then, and indeed you can > see that Astels'' mentality has changed somewhat. His "one assertion > per test" article [1] parses an address and then verifies it by > asserting the getters. His remake, "one expectation per example" [2] > is a bit different in that he passes a mocked builder in and uses that > to verify that the parsing code works, exposing no getters at all. > That to me signifies a fundamental shift in TDD thought. Instead of > thinking about objects in isolation and what services they provide, we > think of the services an object provides and how it interacts with > other objects and uses their services. > > I''m certain that it''s not a new way of thinking, but hopefully you can > see why I''d believe it''s probably not mainstream. > > There''s one other roadblock to my thinking, and it results from using > RSpec almost exclusively within Rails projects. I think it''s obvious > why you mock models when writing view and controller specs. However > less obvious to me is why mock associations in model specs, and I > think it has to do with the fact that AR couples business and > persistence logic.Absolutely! AR presents quite a testing conundrum. It''s clear from the testing approach supported by Rails directly that decoupling from the database is simply not of interested to DHH and company. Or at least it wasn''t early on. I see mock frameworks starting to appear in the Rails codebase, so perhaps this is changing. And I don''t mean to suggest that the Rails core team approach is the wrong approach. It simply does not align with what you''ve called "classical TDD thinking".> If we just had domain objects that never hit a database, then we might > initially mock interactions but then use concrete instances when we > later implemented those classes. When I think of Beck''s Money > example, or Martin Fowler''s video rental list in Refactoring, it seems > silly to me to use mocks in those cases.I think you''re right. Even if you''re going down what I view as the ideal mockists path - mocking everything that you need that doesn''t exist yet - I''ve often used mocks in process, but replaced them w/ the real deal once the real objects existed. Then you''re really using mocks for what they''re most powerful at: interface discovery. And then disposing of them once they''ve passed their usefulness in a given situation. In the case of AR, I keep them around to keep from hitting the DB.> Perhaps you might at the > very beginning, but you''d sub real objects in as you implemented them.D''oh! 
You ARE an ideal mockist!

> We don't do this with AR because they're simply too heavy.

Funny - I'm tempted to remove what I wrote above - but this is fun -
responding as I go and then discovering that you already made the same
point.

> This culminates in another general idea I've had, which is to mock
> services in a lower layer, and use concrete instances for objects in
> the same layer when possible. If we were to split AR into domain
> objects and a data access layer, the domain objects would mock calls
> to the data access layer but use concrete domain objects in the
> tests. The unit tests remain fast and simple, and mocks no longer get
> in the way of refactoring.

Ay, there's the rub. The problem we face is that AR promises huge
productivity gains for the non-TDD-er, and challenges the thinking of
the die-hard TDD-er.

I've gone back and forth about whether it's OK to test validations
like this:

  it "should validate_presence_of digits" do
    PhoneNumber.expects(:validates_presence_of).with(:digits)
    load "#{RAILS_ROOT}/app/models/phone_number.rb"
  end

On the one hand, it looks immediately like we're testing
implementation. On the other, we're not really - we're mocking a call
to an API. The confusion is that the API is represented in the same
object as the one we're testing (at least its class object). I haven't
really done this in anger yet, but I'm starting to think it's the
right way to go - especially now that we have Story Runner to cover
things end to end. WDYT of this approach?

> Of course then you're writing integration tests at a fairly low
> level, I guess, but that's 100% acceptable to me in the interest of
> getting stuff done rather than being dogmatic.

+1 - in the end this is all about getting stuff done and knowing WHEN
you're done.

> > The problem we experience with mocks relates to the fact that we've
> > chosen to live in the beautiful, free, dynamically typed and POORLY
> > TOOLED land of Ruby. When Ruby refactoring tools catch up with
> > those of java and .NET, this pain will all go away.
> >
> > For example - if I'm in IntelliJ in a java project and I have a
> > method like this:
> >
> >   model.getName()
> >
> > and I'm using jmock (the old version), which uses Strings for
> > method names:
> >
> >   model.expects(once()).method("getName").will(returnValue("stub value"))
> >
> > and I do a Rename Method refactoring on getName(), IntelliJ will
> > ask me if I want to change the strings it finds that match getName
> > as well as the method invocations.
> >
> > In Ruby, we do this now w/ search and replace. Not quite as
> > elegant. But under the hood, that's all IntelliJ is doing. It just
> > makes it feel like an integrated step of an automated refactoring.
>
> Agreed. I guess for me it's easier to get the production code right
> and then fix the tests after the fact. I'd hate to do all the work
> of changing the production and test code and then find out it was
> incorrect. Fixing tests after fixing the production code amounts to
> the same work as doing it all in one step, because as you mentioned
> it's essentially a manual process.

> > re: Story Runner. The intent of Story Runner is exactly the same as
> > tools like FIT, etc., that are typically found in the Acceptance
> > Testing space in XP projects. In my experience using FitNesse, it
> > was rare that a customer actually added new tests to a suite.
> > If there were testing folks on board, they would do it (and they
> > would likely be equipped to do it in Story Runner as well), but if
> > not, then the FitNesse tests were at best the result of a
> > collaborative session with the customer and, at worst, our
> > (developers') interpretation of conversations we had had with the
> > customer.
> >
> > I see Story Runner fitting in exactly like that in the short run. I
> > can also see external DSLs emerging that let customers actually
> > write the outputs that Story Runner should produce and run that
> > through a process that writes what we're writing now in Story
> > Runner. But that's probably some time off.
> >
> > I totally agree with your last statement that "it boils down to
> > getting stuff done." And your approach seems to be the approach
> > that I take, given the tools that we have. But I really think it's
> > about tools and not process. And I think that BDD is a lot more
> > like what really experienced TDD'ers do out of the gate. We're just
> > choosing different words and structures to make it easier to
> > communicate across roles on a team (customer, developer, tester,
> > etc.).
>
> So "ideally," who would write Story Runner stories? I put it in
> quotes because I think it would differ greatly depending on the work
> environment, what level of interaction you have with the customer,
> etc. Using TDD terms, would we consider SR stories to be Customer or
> Developer tests? I gather from your insight that they're Customer
> tests.

Yes - in my view they are Customer Tests - but bear in mind that that
means "tests created by the person acting in the customer role." On a
team of one, that might be the same person as the developer.

> Finally, I agree 100% on not focusing on process. I'm trying to
> figure out the most effective process given the tools currently
> available, and will be constantly changing it as more/better tools
> come along. Although I suppose what I should really be spending my
> energy on is building the tools that will make all our lives
> better ;)

Patches always welcome!

Cheers, Pat.

David
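For concreteness, the kind of end-to-end example discussed here - a few
HTTP requests plus verification of the response - might look roughly
like the following Rails integration test. This is only a sketch: the
Widget model, routes and attribute names are hypothetical, and the
details vary by Rails version.

  # test/integration/widget_listing_test.rb - a sketch; assumes a
  # hypothetical Widget model with a name attribute and conventional
  # RESTful routes.
  require File.dirname(__FILE__) + '/../test_helper'

  class WidgetListingTest < ActionController::IntegrationTest
    def test_creating_widgets_and_listing_them
      # a handful of HTTP requests per test, as described above
      post "/widgets", :widget => { :name => "red" }
      post "/widgets", :widget => { :name => "blue" }

      get "/widgets"

      assert_response :success
      # only the observable outcome is asserted
      assert_equal "blue", assigns(:widgets).first.name
    end
  end

Nothing here names a particular finder or mock setup, so a test like
this survives the refactorings described above untouched.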
On 9/2/2007 12:43 PM, David Chelimsky wrote:

> I've gone back and forth about whether it's OK to test validations
> like this:
>
>   it "should validate_presence_of digits" do
>     PhoneNumber.expects(:validates_presence_of).with(:digits)
>     load "#{RAILS_ROOT}/app/models/phone_number.rb"
>   end
>
> [...] I haven't really done this in anger yet, but I'm starting to
> think it's the right way to go - especially now that we have Story
> Runner to cover things end to end. WDYT of this approach?

Personally, I don't much like it. It feels too much like:

  it "should validate_presence_of digits" do
    my_model.line(7).should_read "validates_presence_of :digits"
  end

I can write specs like that all day and ensure absolutely nothing
about my code.

I like to think of specs as a form of N-version programming where N=2
(or maybe N=3 now with Story Runner). By using a different vocabulary
to express the specs than the actual code, we are more likely to think
of the problem differently, and thus find places where the two
versions of our code differ. Sometimes, it means we miswrote the spec;
sometimes, it means we miswrote the code.

But if all your spec does is guarantee that your code reads a certain
way, you've done nothing but protect against accidental edits. And if
you're gonna go that way, why not go all the way:

  it "shouldn't change unless I change the spec too" do
    MD5.new(my_model).should == "0xDEADBEEF0FFD2FFE4..."
  end

I'd much rather see:

  it "should prevent me from entering anything but digits" do
    PhoneNumber.new("800-MATTRESS").should_not be_valid
  end

And then, every time I find an edge case, I add another spec:

  it "should allow me to enter dashes" do
    PhoneNumber.new("800-555-1212").should be_valid
  end

  it "should only allow 10 digits" do
    PhoneNumber.new("800-555-12121212").should_not be_valid
  end

etc.

Jay Levitt
On 9/2/07, Jay Levitt <lists-rspec at shopwatch.org> wrote:

> Personally, I don't much like it. It feels too much like:
>
>   it "should validate_presence_of digits" do
>     my_model.line(7).should_read "validates_presence_of :digits"
>   end
>
> I can write specs like that all day and ensure absolutely nothing
> about my code.
>
> [...]
>
> I'd much rather see:
>
>   it "should prevent me from entering anything but digits" do
>     PhoneNumber.new("800-MATTRESS").should_not be_valid
>   end
>
> And then, every time I find an edge case, I add another spec.

A couple of things to consider:

There's a very useful guideline in TDD that says "test YOUR code, not
everyone else's." The validation library we're testing here is
ActiveRecord's. It's already tested (we hope!).

Also - there's a difference between the behaviour of a system and the
behaviour of an object. The system's job is to validate that the phone
number is all digits. So it makes sense to have examples like that in
high-level examples using Story Runner, Rails integration tests, or an
in-browser suite like Selenium or Watir.

This model object's job is to make sure the input gets validated, not
to actually validate it. If the model made a more OO-feeling call out
to a validation library - something like this:

  class PhoneNumber
    def validators
      @validators ||= []
    end

    def add_validator(validator)
      validators << validator
    end

    def validate(input)
      validators.each { |v| v.validate(input) }
    end
  end

then submitting mock validators via add_validator and setting mock
expectations that they get called would be totally par for the course.

In AR, the validators are added declaratively. This is a Rails design
decision that we have to either live with or write other code around.
Choosing to live with it, it seems to me that mocking the call to
validates_presence_of :digits is no different than mocking validate on
an injected validator.

That all make sense?
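To make that concrete, a spec that mocks validate on an injected
validator - written against the plain-Ruby PhoneNumber sketch above,
not an AR model - might read roughly like this:

  describe PhoneNumber, "validating input" do
    it "should pass the input to each of its validators" do
      number = PhoneNumber.new
      validator = mock("validator")   # stands in for any real validator
      number.add_validator(validator)

      # the interaction we care about: the validator gets the input
      validator.should_receive(:validate).with("8005551212")

      number.validate("8005551212")
    end
  end

If mocking the declarative call really is no different, then the
validates_presence_of expectation is just this same spec with the seam
in a less convenient place.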
On 9/2/07, David Chelimsky <dchelimsky at gmail.com> wrote:

> [...]
>
> In AR, the validators are added declaratively. This is a Rails design
> decision that we have to either live with or write other code around.
> Choosing to live with it, it seems to me that mocking the call to
> validates_presence_of :digits is no different than mocking validate
> on an injected validator.
>
> That all make sense?

There's nothing technically *wrong* with it, and logically it holds
weight. It just doesn't feel right.

Your key point is that we're making an API call, which I agree with.
We also agree that AR probably does too much, and I think this is a
situation where we should go with the flow. We call my_record.valid?
and end up with my_record.errors if it's not valid. An AR object is in
fact responsible for its own validation (even if you feel it's too
much responsibility). It makes sense to specify the object's behavior
in the same way.

Personally, I can't find a strong argument either way. I'm sure it's a
matter of taste here. I would prefer to look at a spec and get as much
info on how to use an object as possible. In that case, creating an
object, calling valid?, and inspecting errors is probably more
helpful.

But after giving this a lot of thought, I'm not sure it warrants a ton
of thought :)

Pat
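For illustration, the create-an-object-call-valid?-and-inspect-errors
style might look like this (a sketch: PhoneNumber and :digits are just
the thread's running example, and errors.on is the old ActiveRecord
errors API):

  describe PhoneNumber, "with no digits" do
    it "should have a validation error on digits" do
      phone = PhoneNumber.new(:digits => nil)
      phone.valid?                                # run the validations
      phone.errors.on(:digits).should_not be_nil  # fail for the right reason
    end
  end

Checking errors.on(:digits) rather than just should_not be_valid pins
the failure to the attribute in question, so the example can't pass or
fail because of some unrelated validation.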
> There's a very useful guideline in TDD that says "test YOUR code, not
> everyone else's." The validation library we're testing here is
> ActiveRecord's. It's already tested (we hope!).

Personally, I don't have the courage to assume Rails code is always
working. I know from experience that it doesn't always work, although
it is quite solid in general. The Rails code has been tested, but not
in conjunction with my particular apps. I also want to test my
assumptions about how the Rails API works - maybe it doesn't work as I
think. Having tests/specs that cover Rails's interaction with my app,
which higher-level tests (system/integration tests) naturally do,
gives me much more courage to upgrade Rails as well.

Peter
On 9/3/07, Peter Marklund <peter_marklund at fastmail.fm> wrote:

> Personally, I don't have the courage to assume Rails code is always
> working. [...] Having tests/specs that cover Rails's interaction with
> my app, which higher-level tests (system/integration tests) naturally
> do, gives me much more courage to upgrade Rails as well.

That's a good point. Having specs in place that demonstrate how you
expect the code to behave will alert you when a newer version of Rails
behaves a bit differently. Granted, in the validates_presence_of
example that probably won't be an issue, but you get the idea.

I think it was Kevin Clark who said it's a good idea to learn Ruby by
writing specs... then whenever you upgrade Ruby or install new
libraries, your spec suite will make it clear when your assumptions
about the language need to change.

Pat
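In that spirit, a learn-Ruby-by-writing-specs example can be as small
as pinning down a single core method (a throwaway sketch):

  describe "String#squeeze" do
    it "should collapse runs of the given character" do
      "mississippi".squeeze("s").should == "misisippi"
    end
  end

Upgrade Ruby, rerun the suite, and any assumption that has drifted
announces itself.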
On Sep 3, 2007, at 3:48 AM, Pat Maddox wrote:

>> Personally, I don't have the courage to assume Rails code is always
>> working. I know from experience that it doesn't always work,
>> although it is quite solid in general.

I think there are also a lot of leaky abstractions when it comes to
Rails code. It's tempting to think that Rails is just doing a bunch of
stuff automatically for you (and it is) - but there are edge cases,
and unless you know *exactly* what Rails is doing under the covers,
testing the behaviour seems to be a good idea. I've already run into
one bug in the last week (when doing something rather dynamic in a
model class) that I wouldn't have expected.

Scott
On 9/3/07, Peter Marklund <peter_marklund at fastmail.fm> wrote:

> Personally, I don't have the courage to assume Rails code is always
> working.

The school of thought that says "test your code" addresses this issue
as well - you can have examples that specifically test assumptions
about an API - but then they should be separated from your other
examples (as they are not testing your code). Check out JUnit Recipes
by J.B. Rainsberger.

> I know from experience that it doesn't always work, although it is
> quite solid in general. The Rails code has been tested, but not in
> conjunction with my particular apps. I also want to test my
> assumptions about how the Rails API works - maybe it doesn't work as
> I think.

Again - JB calls these "learning tests."

> Having tests/specs that cover Rails's interaction with my app, which
> higher-level tests (system/integration tests) naturally do, gives me
> much more courage to upgrade Rails as well.

Agreed. And Story Runner is the perfect place for these.

Cheers,
David
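A learning test in that sense might pin down an assumption about the
Rails API itself, kept in its own spec file away from the
application's examples (a sketch, reusing the thread's hypothetical
PhoneNumber model):

  # learning spec: documents an assumption about validates_presence_of,
  # not about our own code, so it lives apart from the app's specs
  describe "validates_presence_of (learning spec)" do
    it "should reject a blank string, not just nil" do
      PhoneNumber.new(:digits => "").should_not be_valid
    end
  end

When a Rails upgrade changes the behaviour, it's this file that fails,
not some seemingly unrelated model spec.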
On 9/2/2007 11:49 PM, David Chelimsky wrote:

> There's a very useful guideline in TDD that says "test YOUR code, not
> everyone else's." The validation library we're testing here is
> ActiveRecord's. It's already tested (we hope!).

Right... and I'm not testing that ActiveRecord's validation works. I'm
testing that my model works as I expect it to work.

For instance, in your example, you just verify that you call
validates_presence_of with the field name :digits. You're not
verifying that that's the right thing to do, or that it behaves the
way you expect it to.

Also, I think this conflicts with "test behavior, not implementation".
All I care about is the behavior of the model; I don't care if it
calls validates_presence_of, or if it calls acts_as_phone_number.

> Also - there's a difference between the behaviour of a system and the
> behaviour of an object. The system's job is to validate that the
> phone number is all digits. So it makes sense to have examples like
> that in high-level examples using Story Runner, Rails integration
> tests, or an in-browser suite like Selenium or Watir.

Ah, but (as Pat pointed out) in Rails, validations are, in fact, the
job of the model. They may be done with validates_* "declarations", or
with custom code, or with plugins.

> This model object's job is to make sure the input gets validated, not
> to actually validate it. [...] Then submitting mock validators via
> add_validator and setting mock expectations that they get called
> would be totally par for the course.

Yeah, and I guess I still haven't swallowed that part of mocking -
because, again, it's brittle and tied to implementation. I have no
problem mocking out ActiveRecord, because that's a major part of any
Rails app and it's a given that you'll be using it in a certain way.
Ditto for any other major library. But validations are so simplistic
that you might write a given validation in five different ways, and
specifying -which- of those five ways the code should use just feels
wrong.

And even for AR, as someone pointed out - what if I want to use .new
or .build instead of .create, or .update_attributes instead of the
setter function? The ideal answer for that is to build a more
sophisticated AR mock that lets you write expectations that work in
any of those cases. I want to know that
User.register.should do_something_that_creates_a_new_record, not that
it explicitly called .create.

And interestingly, in the case of .update_attributes vs. direct
assignment, it seems to me that the proper way to "test the behavior,
not the implementation" is to check the value of the field after the
fact - which of course apparently conflicts with "test behavior, not
state". But when the behavior IS to set a certain state, I feel like
it's OK.

Jay
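One state-based way to express that, without pinning down whether the
implementation uses create, new/save, or update_attributes
(User.register here is Jay's hypothetical method, not a real API):

  describe User, ".register" do
    it "should result in one more persisted user" do
      count_before = User.count
      User.register(:login => "jay")
      # only the observable outcome is specified
      User.count.should == count_before + 1
    end
  end

Any of the five ways of writing the record will pass this example.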
On 7/9/2007, at 5:36, Jay Levitt wrote:

>> There's a very useful guideline in TDD that says "test YOUR code,
>> not everyone else's." The validation library we're testing here is
>> ActiveRecord's. It's already tested (we hope!).
>
> Right... and I'm not testing that ActiveRecord's validation works.
> I'm testing that my model works as I expect it to work.
>
> For instance, in your example, you just verify that you call
> validates_presence_of with the field name :digits. You're not
> verifying that that's the right thing to do, or that it behaves the
> way you expect it to.
>
> Also, I think this conflicts with "test behavior, not
> implementation". All I care about is the behavior of the model; I
> don't care if it calls validates_presence_of, or if it calls
> acts_as_phone_number.

Very true that you shouldn't be testing ActiveRecord's validation
(Rails' own unit tests are there for that).

But if you want to do truly *driven* BDD then you will have to test
something; in other words, *before* you go ahead and add this line to
your model:

  validates_presence_of :foo

you need to write a failing spec for it first. Otherwise, why would
you write it? Doing BDD in its purest form, you shouldn't be writing
*any* line of code without your specs driving it. This means the
familiar "write failing spec, write code, confirm working spec" cycle.

So the question is, what is the best kind of spec to write to *drive*
the writing of your "validates_presence_of" lines? For some
validations it's quite easy. For others it is less straightforward.
There are probably multiple valid (or valid-ish) answers, but it's
sometimes difficult to know which one is best.

Cheers,
Wincent
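One candidate driving spec for the validates_presence_of case - it
fails until the validation line exists, without ever naming the macro
(a sketch using the thread's PhoneNumber example):

  describe PhoneNumber do
    it "should not be valid without digits" do
      PhoneNumber.new(:digits => nil).should_not be_valid
    end

    it "should be valid with digits" do
      PhoneNumber.new(:digits => "8005551212").should be_valid
    end
  end

The second example earns its keep by guarding against an overzealous
validation that rejects good input too.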
On Sep 7, 2007, at 8:24 AM, Wincent Colaiuta wrote:

> So the question is, what is the best kind of spec to write to *drive*
> the writing of your "validates_presence_of" lines? For some
> validations it's quite easy. For others it is less straightforward.
> There are probably multiple valid (or valid-ish) answers, but it's
> sometimes difficult to know which one is best.

I've kind of adopted this plain-vanilla approach:

http://snippets.dzone.com/posts/show/4508

You'll see I'm a little fat, especially with my polymorphic
association. Maybe they can be improved, but it's been the right
balance of testing before I write code for me - I gain confidence and
momentum, and my later iterations are constrained enough to keep me
from stepping out of bounds. I'm sure my ideas will evolve over time.
Wincent Colaiuta wrote:

> But if you want to do truly *driven* BDD then you will have to test
> something; in other words, *before* you go ahead and add this line
> to your model:
>
>   validates_presence_of :foo
>
> you need to write a failing spec for it first. Otherwise, why would
> you write it?
>
> So the question is, what is the best kind of spec to write to *drive*
> the writing of your "validates_presence_of" lines? For some
> validations it's quite easy. For others it is less straightforward.
> There are probably multiple valid (or valid-ish) answers, but it's
> sometimes difficult to know which one is best.

Well put! To me, if the spec I write is:

  Model.expects(:validates_presence_of).with(:digits)

then I haven't written a spec at all - I've written the code I plan to
write, and spelled it differently! The English version of that spec
is:

  Model
  - should call validates_presence_of with parameter :digits

That's just specifying what a line of my code should *say*, not how
Model should *behave*.

I really like Wincent's approach - test that valid input yields a
valid response and that invalid input yields an invalid response.

Jay