OK, so I've played a bit with mocks and mock_model in controller and view specs, and I have a question. Is this statement really correct?

"We highly recommend that you exploit the mock framework here rather than providing real model objects in order to keep the view specs isolated from changes to your models." (http://rspec.rubyforge.org/documentation/rails/writing/views.html, in the section on 'assigns')

I ask because this wonderful declaration passes in my view spec even though the Project model doesn't have the field 'synopsis' yet:

  @project1 = mock_model(Project)
  @project1.stub!(:id).and_return(1)
  @project1.stub!(:name).and_return("My first project")
  @project1.stub!(:synopsis).and_return("This is a fantastic new project")
  assigns[:projects] = [@project1]

  it "should show the list of projects with name and synopsis" do
    render "/projects/index.rhtml"
    response.should have_tag('div#project_1_name', 'My first project')
    response.should have_tag('div#project_1_synopsis', 'This is a fantastic new project')
    response.should have_tag('div#project_2_name', 'My second project')
    response.should have_tag('div#project_2_synopsis', 'This is another fantastic project')
    response.should have_tag('a', 'This is another fantastic project')
  end

This is handy and keeps the view spec isolated from changes to your models, but is that really the point? What if someone later changes the model and updates the model specs so that they pass, but doesn't realize that they've then broken the view? I'm sure I am simply missing the point here, or not taking into account the integration testing that I'd expect to catch such changes, but somehow I want my view specs to tell me that the views are no longer going to behave as expected.

Thanks

Andy
On Dec 6, 2007 10:56 AM, Andy Goundry <andy at adveho.net> wrote:
> This is handy and keeps the view test isolated from changes to your
> models, but is that really the point?

It depends on what you value. If you are doing BDD, then you are running all of your examples between every change. If you are doing that, you value fast-running examples.

> What if someone later changes the model and updates the model tests so
> that they pass but do not realize that they've then broken the view?
> I'm sure i am simply missing the point here or not taking into account
> integration testing that would i expect aim to catch such changes, but
> somehow i want my view tests to tell me that the views are no longer
> going to behave as expected

This is a matter of mindset. In my view, the view is still behaving as expected; it's just that the expectation is wrong. What's not behaving correctly is the application, which, as you point out, we would learn from stories (integration tests).

HTH,
David

_______________________________________________
rspec-users mailing list
rspec-users at rubyforge.org
http://rubyforge.org/mailman/listinfo/rspec-users
On 6 Dec 2007, at 16:56, Andy Goundry wrote:
> This is handy and keeps the view test isolated from changes to your
> models, but is that really the point?

Yes, that's part of the whole idea of using mocks. (Similarly, two interacting models will be isolated from each other's implementation.)

> What if someone later changes the model and updates the model tests so
> that they pass but do not realize that they've then broken the view?

This is an engineering problem. For example, perhaps you could have one helper that's used to manufacture mock Projects, and use it in the view specs and the model specs; changes to the attributes of Project (and the corresponding changes to its spec) will then require that helper to be updated so that the model specs still pass, and the view spec will fail accordingly.

> somehow i want my view tests to tell me that the views are no longer
> going to behave as expected

Your view specs tell you that your views will behave as expected _under the assumption that_ your model objects behave as expected. That assumption needs to be checked elsewhere (in the model spec).

Cheers,
-Tom
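Tom's shared-helper idea could be sketched in plain Ruby like this (no RSpec here; `ProjectExamples` and its attribute values are invented for illustration - a real version would feed the same attributes to mock_model in view specs and to Project.new in model specs):

```ruby
require 'ostruct'

module ProjectExamples
  # Canonical attributes for a test Project - the single place that must
  # change when the schema changes. (Values are made up for this sketch.)
  ATTRIBUTES = { id: 1, name: "My first project",
                 synopsis: "This is a fantastic new project" }.freeze

  # Build a lightweight stand-in carrying the canonical attributes,
  # optionally overridden per example.
  def self.stand_in(overrides = {})
    OpenStruct.new(ATTRIBUTES.merge(overrides))
  end
end

project = ProjectExamples.stand_in(name: "My second project")
project.name      # => "My second project"
project.synopsis  # => "This is a fantastic new project"
```

If `synopsis` is later renamed on the model, only ATTRIBUTES needs updating, and any view spec still built through the helper fails loudly instead of silently passing against a stale stub.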
Hi!

> > This is handy and keeps the view test isolated from changes to your
> > models, but is that really the point?

I was very confused at first as well. It didn't make any sense to me, and I'm not using it at all. As far as I can tell, it's an optional tool for going nuts with views when needed. I will use it when some stuff in a view becomes super important to be there. However, as a one-man-band project I haven't felt that need yet.

The second thing is about how you like to develop your stuff. As far as I know, David starts with Story -> views -> controller -> model. I prefer to go this way: Story -> model/controller -> views. So now you might guess why speccing views is a nice thing when you go David's way, top-down.

Anyhow, mocking in controllers (and in views) makes much more sense now with Story Runner in the big picture. General "does it work at all" stuff goes to Story Runner, and specific low-level stuff goes to specs. So it's up to you whether you care about low-level stuff in views.

One thing I still don't like so much is that RSpec "forces" you to develop things super vertically, or otherwise your mocks will get out of sync very quickly. Correct me if I'm wrong!

Oki,
Priit

PS. Somehow autotest does not pick up stories to run. I haven't yet investigated why.
On Dec 7, 2007 8:30 PM, Priit Tamboom <priit at mx.ee> wrote:
> One thing what I still don't like so much is that rspec "force" you to
> develop things super vertically or otherwise your mocks will be out of
> sync very quickly. Correct me if I'm wrong !!!

RSpec doesn't force you to do anything at all. However, the agile approach tends to be vertical slices in short iterations. Working outside-in, using mocks, etc. all tie in with that thinking. But RSpec is certainly not going to throw errors at you if you decide to write your entire model layer first.

> PS. somehow autotest does not pick up stories to run. I haven't yet
> investigate it why.

This is by design. Autotest supports the TDD process - rapid iterations of red/green/refactor.
Having them run your stories too would slow things down considerably.
Thanks for all the feedback. Personally, I am working outside-in, from views to models, so mocking does have its place. After lots of trialling, I am confident now that a Factory class can satisfy my need for using mocks and real models in different places. I define the characteristics of an intended model in the factory and ask it to return either a mock_model or a real one depending on my specific need. Once I've used it in anger, I'll mail details of my implementation and experiences.

Although I have played with Story Runner, I have yet to decide how I'll fit it into my development process. In fact, I love Story Runner; it's just that I am not sure how much time I can afford to assign to tests on client work whilst I am still getting up to speed.

As a note, I recently wrote a functional spec document for a client using the Given, When, Then approach for each use case, and the client loved it! It is a very clear way of writing specs.

Andy

On 8 Dec 2007, at 02:40, "David Chelimsky" <dchelimsky at gmail.com> wrote:
> RSpec doesn't force you to anything at all. However, the agile
> approach tends to be vertical slices in short iterations. Working
> outside-in, using mocks, etc all ties in with that thinking.
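Andy doesn't post his factory, but a minimal plain-Ruby sketch of the shape he describes might look like this (all names and defaults invented; his real version would return RSpec's mock_model(Project) for the mock case and an ActiveRecord instance for the real one):

```ruby
require 'ostruct'

# Stands in for the real ActiveRecord model in this sketch.
Project = Struct.new(:id, :name, :synopsis)

class ProjectFactory
  # The characteristics of the intended model live in one place.
  DEFAULTS = { id: 1, name: "My first project",
               synopsis: "This is a fantastic new project" }.freeze

  # kind is :mock for a cheap stand-in, :real for an actual model instance.
  def self.build(kind, overrides = {})
    attrs = DEFAULTS.merge(overrides)
    case kind
    when :mock then OpenStruct.new(attrs)
    when :real then Project.new(attrs[:id], attrs[:name], attrs[:synopsis])
    else raise ArgumentError, "unknown kind: #{kind.inspect}"
    end
  end
end
```

A view spec would call `ProjectFactory.build(:mock)` and a model spec `ProjectFactory.build(:real)`; either way the attribute definitions stay in sync in one place.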
I prefer the mantra "mock roles, not objects"; in other words, mock things that have behaviour (services, components, resources, whatever your preferred term is) rather than stubbing out domain objects themselves. If you have to mock domain objects it's usually a smell that your domain implementation is too tightly coupled to some infrastructure.

The Rails community is the first place I've encountered stubbing domain objects as a norm (and in fact as an encouraged "best practice"). It seems to be a consequence of how tightly coupled the model classes are to the database. I don't use Rails in anger, and in the other technologies and frameworks I use (in Java, .Net or Ruby) I never mock the domain model. It seems unwieldy and overly verbose to me to have to stub properties on a model class. I usually use a builder pattern:

  cheese = CheeseBuilder.to_cheese # with suitable default values for testing

The builder has lots of methods that start "with_" and return the builder instance, so you can train-wreck the properties:

  another_cheese = CheeseBuilder.with_type(:edam).with_flavour(:mild).to_cheese
  toastie = ToastieBuilder.with_cheese(cheese).to_toastie # composing domain objects

In applications with a database, I then have a very thin suite of low-level integration tests that use "real" domain objects, wired up to a real database, to verify the behaviour at that level. Of course this is both slow and highly dependent on infrastructure, so I am careful to keep the integration examples separate from the interaction-based ones that I can isolate.

Maybe it's the way Rails encourages you to write apps - where it's mostly about getting data from a screen to a database and back again - that makes people more tolerant of such highly-coupled tests. For myself, I use builders for domain objects and mocks for service dependencies whenever I can, and have a minimal suite of integration tests that require everything to be wired together.
Using fixtures and database setup for regular behavioural examples smacks of data-oriented programming to me, and stubbing domain objects feels like solving the wrong problem.

Cheers,
Dan

On Dec 8, 2007 9:20 AM, Andy Goundry <andy at adveho.net> wrote:
> Thanks for all the feedback. Personally, i am working outside in, from
> views to models, so mocking does have its place.
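Dan's builder can be sketched in a few lines of plain Ruby (Cheese and the default values are invented; his snippets start the chain from class-level with_* methods, while this sketch uses CheeseBuilder.new for brevity):

```ruby
Cheese = Struct.new(:type, :flavour)

class CheeseBuilder
  def initialize
    @type = :cheddar     # suitable default values for testing
    @flavour = :strong
  end

  def with_type(type)
    @type = type
    self                 # returning self is what enables the "train-wreck" chain
  end

  def with_flavour(flavour)
    @flavour = flavour
    self
  end

  def to_cheese
    Cheese.new(@type, @flavour)
  end
end

cheese = CheeseBuilder.new.to_cheese  # all defaults
another_cheese = CheeseBuilder.new.with_type(:edam).with_flavour(:mild).to_cheese
```

The value over property-stubbing is that every example gets a fully-formed domain object with sensible defaults, and only names the attributes it actually cares about.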
On Dec 8, 2007 4:06 AM, Dan North <tastapod at gmail.com> wrote:
> I prefer the mantra "mock roles, not objects", in other words, mock things
> that have behaviour (services, components, resources, whatever your
> preferred term is) rather than stubbing out domain objects themselves. If
> you have to mock domain objects it's usually a smell that your domain
> implementation is too tightly coupled to some infrastructure.

Assuming you could easily write Rails specs using the real domain objects, but not hit the database, would you "never" mock domain objects (where "never" means you deviate only in extraordinary circumstances)? I'm mostly curious about the interaction between controller and model... if you use real models, then changes to the model code could very well lead to failing controller specs, even though the controller's logic is still correct.

What is your opinion on isolating tests? Do you try to test each class in complete isolation, mocking its collaborators? When you use interaction-based tests, do you always leave those in place, or do you substitute real objects as you implement them (and if so, do your changes move to a more state-based style)? How do you approach specifying different layers within an app? Do you use real implementations if there are lightweight ones available, or do you mock everything out?

I realize that's a ton of questions... I'd be very grateful (and impressed!) if you took the time to answer any of them. Also, I'd love to hear input from other people as well.

Pat
Pat,

I'm going to reply by promising to reply. You've asked a ton of really useful and insightful questions. I can't do them justice without sitting down and spending a bunch of time thinking about them.

I'm going to be off the radar for a bit over Christmas - I've had an insane year and I've promised myself (and my wife) some quiet time. Your questions have a little star next to them in my gmail inbox, which means at the very least they'll be ignored less than the other mail I have to respond to :)

The one-sentence response, though, is that I honestly don't know (which is why I need to think about it). I can tell you I *think* I isolate services from their dependencies using mocks, I *think* I never stub domain objects (I definitely never mock them, but stubbing them is different), and I can't say how I test layers because I think we have different definitions of layers.

The reason I'm being so vague is that I usually specify behaviour from the outside in, starting with the "outermost" objects (the ones that appear in the scenario steps) and working inwards as I implement each bit of behaviour. That way I discover service dependencies that I introduce as mocks, and other domain objects that become, well, domain objects. Then there are other little classes that fall out of the mix that seem to make sense as I go along. I don't usually start out with much more of a strategy than that. I can't speak as a tester because I'm not one, so I can't really give you a sensible answer for how isolated my tests are; I simply don't have tests at that level. At an acceptance level my scenarios only ever use real objects wired together, doing full end-to-end testing. Sometimes I'll swap in a lighter-weight implementation (say an in-memory database rather than a remote one, or an in-thread Java web container like Jetty rather than firing up Tomcat), but all the wiring is still the same (say JDBC or HTTP-over-the-wire).
I'm still not entirely sure how this maps to Rails, but in Java MVC web apps I would *want* the controller examples failing if the model's behaviour changed in a particular way, so I can't think of a reason why I would want fake domain objects.

Like I said, I'll have a proper think and get back to you.

Cheers,
Dan

On Dec 15, 2007 7:17 AM, Pat Maddox <pergesu at gmail.com> wrote:
> Assuming you could easily write Rails specs using the real domain
> objects, but not hit the database, would you "never" mock domain
> objects (where "never" means you deviate only in extraordinary
> circumstances)?
Coming to this thread a bit late:

I think I'm pretty close to Dan in practice: I'm not a big fan of fine-grained isolation in writing your tests. The practice seems to me like it would just bog you down. When I'm writing a behavior for a particular thing, such as a controller, I don't want to have to worry about the precise messages that are passed to its collaborators. I try to think about it in a fairly "black box" manner: presupposing that there's a given document in a database table, when I make an HTTP request that's looking for that document, I should get that document in such-and-such a format. Ideally I wouldn't specify too much whether the controller hits Document.find or Document.find_by_sql or gets it out of some disk cache or gets the data by doing a magical little dance in a faerie circle off in the forest. It's really not my test's problem.

On the other hand, I do think mocking is extremely useful when you're dealing with very externalized services with narrow, rigid interfaces that you can't implicitly test all the time. At work I have to write lots of complex tests around a specific web service, but I don't have a lot of control over it, so I wrote a fairly complex mock for that service. But even then it's a different sort of mock: it's more state-aware than surface-aware, which is part of the point as I see it. Of course, writing those sorts of mocks is much more time-consuming.

If you haven't seen it before, Martin Fowler has a pretty good article about the differences in styles: http://martinfowler.com/articles/mocksArentStubs.html

Francis Hwang
http://fhwang.net/

On Dec 16, 2007, at 5:59 PM, Dan North wrote:
> Pat.
>
> I'm going to reply by promising to reply. You've asked a ton of
> really useful and insightful questions. I can't do them justice
> without sitting down and spending a bunch of time thinking about them.
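Francis's "state-aware" fake might look roughly like this in plain Ruby (DocumentService and its operations are invented stand-ins for the web service he mentions): the fake holds real state and answers queries from it, so tests exercise behaviour at the interface rather than pinning down the exact messages sent.

```ruby
# A state-aware fake: it remembers what was stored and behaves like the
# real service at the interface level, instead of merely asserting that
# particular method calls were received in a particular order.
class FakeDocumentService
  def initialize
    @docs = {}
  end

  def put(id, body)
    @docs[id] = body
  end

  def fetch(id)
    @docs.fetch(id) { raise KeyError, "no document #{id}" }
  end
end

service = FakeDocumentService.new
service.put(1, "annual report")
service.fetch(1)  # => "annual report"
```

A surface-aware mock would instead expect `fetch(1)` to be called and return a canned value; the fake above keeps working even if the code under test reaches the document through a different sequence of calls.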
Francis and Pat probably know my thoughts on this already, but as far as I can see, mocks (at least the message-based ones) are popular for one reason in the Rails/ActiveRecord world: speed. Mocks are extremely fast. I don't think it's uncommon for those who write specs for Rails projects to have a full test suite running in under 20 seconds if they are mocking all dependencies. Primarily, this means using mocks for all associations on a Rails model, and using only mocks in controller specs.

The issue of speed seems secondary, but I can already tell how costly a long build cycle is. At work our test suite has about 1800 specs, and takes around 3 minutes to run (and hits the database in all model specs). My coworker actually released something into production on Friday before he left which failed the build. Obviously this has a serious effect on the end user, who is currently receiving errors. If the test suite took 20 seconds to run, he would be running it all the time, and this error would never have occurred. The fact that the specs don't run quickly means that he isn't going to run them all the time, and will need to rely on a CI server to beep or do something obnoxious like that (which isn't an option in the shared office space in which we are currently working).

Plus, let's be honest: we use tests as our feedback loop. The tighter we can get this, the closer we can stay to the code. When the specs take more than a few minutes to run, we no longer have the luxury of running the whole suite every time we make a little one-line change. We are forced to run specs from one file, or a subset of the specs from one file, and we lose a certain amount of feedback on how our code is integrating.

Yes, there are other reasons for using mocks - like defining interfaces that don't exist (or may not exist for a long time); but so far the main reason I've seen them used is speed.
I think this is a major problem with ActiveRecord - and one which can be solved only by moving to an ORM with a different design pattern (like a true DataMapper). Lafcadio would probably be in the running for this sort of thing - a library which can isolate the database and the API with a middle layer, which can easily be mocked either by a mock library that could be a drop-in replacement for the database, or by a middle layer that could easily be stubbed out with message-based stubs.

As always, it's good to hear this sort of discussion going on. (Francis: it was exactly this sort of discussion that got me involved with RSpec in the first place.)

Regards,
Scott

On Dec 16, 2007, at 6:22 PM, Francis Hwang wrote:
> Coming to this thread a bit late:
>
> I think I'm pretty close to Dan, in practice: I'm not a big fan of
> fine-grained isolation in writing your tests.
> But even then it's a different sort of mock: It's more
> state-aware than surface-aware, which is part of the point as I see
> it. Of course, writing those sorts of mocks is much more time-consuming.
>
> If you haven't seen it before, Martin Fowler has a pretty good
> article about the differences in styles: http://martinfowler.com/
> articles/mocksArentStubs.html
>
> Francis Hwang
> http://fhwang.net/
>
>
>
> On Dec 16, 2007, at 5:59 PM, Dan North wrote:
>
>> Pat.
>>
>> I'm going to reply by promising to reply. You've asked a ton of
>> really useful and insightful questions. I can't do them justice
>> without sitting down and spending a bunch of time thinking about
>> them.
>>
>> I'm going to be off the radar for a bit over Christmas - I've had
>> an insane year and I've promised myself (and my wife) some quiet
>> time. Your questions have a little star next to them in my gmail
>> inbox, which means at the very least they'll be ignored less than
>> the other mail I have to respond to :)
>>
>> The one-sentence response, though, is that I honestly don't know
>> (which is why I need to think about it). I can tell you I think I
>> isolate services from their dependencies using mocks, I think I
>> never stub domain objects (I definitely never mock them, but
>> stubbing them is different), I can't say how I test layers because
>> I think we have a different definition of layers.
>>
>> The reason I'm being so vague is that I usually specify
>> behaviour from the outside in, starting with the "outermost"
>> objects (the ones that appear in the scenario steps) and working
>> inwards as I implement each bit of behaviour. That way I discover
>> service dependencies that I introduce as mocks, and other domain
>> objects that become, well, domain objects. Then there are other
>> little classes that fall out of the mix that seem to make sense as
>> I go along. I don't usually start out with much more of a strategy
>> than that.
>> I can't speak as a tester because I'm not one, so I
>> can't really give you a sensible answer for how isolated my tests
>> are. I simply don't have tests at that level. At an acceptance
>> level my scenarios only ever use real objects wired together doing
>> full end-to-end testing. Sometimes I'll swap in a lighter-weight
>> implementation (say using an in-memory database rather than a
>> remote one, or an in-thread Java web container like Jetty rather
>> than firing up Tomcat), but all the wiring is still the same (say
>> JDBC or HTTP-over-the-wire). I'm still not entirely sure how this
>> maps to Rails, but in Java MVC web apps I would want the controller
>> examples failing if the model's behaviour changed in a particular
>> way, so I can't think of a reason why I would want fake domain
>> objects.
>>
>> Like I said, I'll have a proper think and get back to you.
>>
>> Cheers,
>> Dan
>>
>> On Dec 15, 2007 7:17 AM, Pat Maddox <pergesu at gmail.com> wrote:
>> On Dec 8, 2007 4:06 AM, Dan North <tastapod at gmail.com> wrote:
>>> I prefer the mantra "mock roles, not objects", in other words,
>>> mock things that have behaviour (services, components, resources,
>>> whatever your preferred term is) rather than stubbing out domain
>>> objects themselves. If you have to mock domain objects it's
>>> usually a smell that your domain implementation is too tightly
>>> coupled to some infrastructure.
>>
>> Assuming you could easily write Rails specs using the real domain
>> objects, but not hit the database, would you "never" mock domain
>> objects (where "never" means you deviate only in extraordinary
>> circumstances)? I'm mostly curious in the interaction between
>> controller and model...if you use real models, then changes to the
>> model code could very well lead to failing controller specs, even
>> though the controller's logic is still correct.
>>
>> What is your opinion on isolating tests?
>> Do you try to test each
>> class in complete isolation, mocking its collaborators? When you use
>> interaction-based tests, do you always leave those in place, or do you
>> substitute real objects as you implement them (and if so, do your
>> changes move to a more state-based style)? How do you approach
>> specifying different layers within an app? Do you use real
>> implementations if there are lightweight ones available, or do you
>> mock everything out?
>>
>> I realize that's a ton of questions...I'd be very grateful (and
>> impressed!) if you took the time to answer any of them. Also I'd love
>> to hear input from other people as well.
>>
>> Pat
>>
>> _______________________________________________
>> rspec-users mailing list
>> rspec-users at rubyforge.org
>> http://rubyforge.org/mailman/listinfo/rspec-users
>
> _______________________________________________
> rspec-users mailing list
> rspec-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/rspec-users
On Dec 16, 2007 7:43 PM, Scott Taylor <mailing_lists at railsnewbie.com> wrote:
>
> Francis and Pat probably know my thoughts on this already, but as
> far as I can see it, mocks (at least the message-based ones) are
> popular for one reason in the rails / active-record world:
>
> Speed. Mocks are extremely fast. I don't think it's uncommon for
> those who write specs for rails projects to have a full test suite
> running in under 20 seconds if they are mocking all dependencies.
> Primarily, this means using mocks for all associations on a Rails
> model, and using only mocks for controller specs.

My experience with AR is that AR itself (mainly object instantiation) is slow, not the queries. Mocking the queries did not result in a worthwhile test run time savings. Rails creates lots of objects, which causes lots of slowness. It's death by a thousand cuts.

I guess one could mock out the entire AR object, but I'm not convinced that it would result in large performance benefits in many cases. I've tried doing this a couple of times and did not save much time at all. Of course, this was done in view examples on a project that uses Markaby (which is slow).

Whatever you do, I recommend taking performance metrics of your suite as you try to diagnose the slowness. The results will probably be surprising.

> The issue of speed seems secondary, but I can already tell how costly
> a long build cycle is. At work our test suite has about 1800 specs,
> and takes around 3 minutes to run (and hits the database in all model
> specs). My coworker actually released something into production on
> Friday before he left which failed the build. Obviously, this has a
> serious effect on the end user, who is currently receiving errors.
> If the test suite took 20 seconds to run, he would be running it all
> the time, and this error would never have occurred.
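Brian's advice to measure before mocking can be followed with Ruby's standard Benchmark library. This is a generic sketch of that technique (the Cheap struct and the loop count are invented for illustration), not his actual profiling setup:

```ruby
require 'benchmark'

# Time how long it takes to build many plain Ruby objects. Comparing
# this number against the same loop over ActiveRecord instantiations
# (e.g. Project.new in a Rails app) shows where the cost really lies.
Cheap = Struct.new(:name, :synopsis)

elapsed = Benchmark.realtime do
  10_000.times { Cheap.new("a name", "a synopsis") }
end

puts "plain objects: #{elapsed} seconds"
# In a Rails app, add a second timing around 10_000 AR instantiations
# and compare the two numbers before deciding what to mock.
```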
> The fact that
> the specs don't run quickly means that he isn't going to run them all
> the time, and will need to rely on a CI server to beep or do something
> obnoxious like that (which isn't an option in the shared office space
> in which we are currently working).
>
> Plus, let's be honest: we use tests as our feedback loop. The
> tighter we can get this, the closer we can stay to the code. When the
> specs take over a few minutes to run, we no longer have the luxury of
> running the whole suite every time we make a little one-line change.
> We are forced to run specs from one file, or a subset of the specs
> from one file, and we lose a certain amount of feedback on how our
> code is integrating.
>
> Yes, there are other reasons for using mocks - like defining
> interfaces that don't exist (or may not exist for a long time); so
> far the main reason I've seen them used is for speed. I think this
> is a major problem with activerecord - and one which can only be
> solved by moving to an ORM with a different design pattern (like a
> true DataMapper). Lafcadio would probably be in the running for this
> sort of thing: a library which can isolate the database and the API
> with a middle layer, which can easily be mocked either by a mock
> library which could be a drop-in replacement for the database, or a
> middle layer which could easily be stubbed out with message-based stubs.

One thing about AR that hurts is all of the copies of the same record. It would be really nice if there were only one instance of each record in the thread. This would help performance and significantly reduce the need to reload the object.

> As always, it's good to hear this sort of discussion going on.
> (Francis: It was exactly this sort of discussion that got me
> involved with RSpec in the first place)
>
> Regards,
>
> Scott
>
> On Dec 16, 2007, at 6:22 PM, Francis Hwang wrote:
>
> > Coming to this thread a bit late:
> >
> > I think I'm pretty close to Dan, in practice: I'm not a big fan of
> > fine-grained isolation in writing your tests. The practice seems to
> > me like it would just bog you down. When I'm writing a behavior for a
> > particular thing, such as a controller, I don't want to have to worry
> > about the precise messages that are passed to its collaborators. I
> > try to think in a fairly "black box" manner about it: Presupposing
> > that there's a given document in a database table, when I make an
> > HTTP request that's looking for that document, I should get that
> > document in such-and-such a format. Ideally I wouldn't specify too
> > much whether the controller hits Document.find or
> > Document.find_by_sql or gets it out of some disk cache or gets the
> > data by doing a magical little dance in a faerie circle off in the
> > forest. It's really not my test's problem.
> > On the other hand, I do think mocking is extremely useful when you're
> > dealing with very externalized services with narrow, rigid interfaces
> > that you can't implicitly test all the time. At work I have to write
> > lots of complex tests around a specific web service, but I don't have
> > a lot of control over it, so I wrote a fairly complex mock for that
> > service. But even then it's a different sort of mock: It's more
> > state-aware than surface-aware, which is part of the point as I see
> > it. Of course, writing those sorts of mocks is much more time-consuming.
> >
> > If you haven't seen it before, Martin Fowler has a pretty good
> > article about the differences in styles: http://martinfowler.com/
> > articles/mocksArentStubs.html
> >
> > Francis Hwang
> > http://fhwang.net/
> >
> >
> >
> > On Dec 16, 2007, at 5:59 PM, Dan North wrote:
> >
> >> Pat.
> >>
> >> I'm going to reply by promising to reply. You've asked a ton of
> >> really useful and insightful questions. I can't do them justice
> >> without sitting down and spending a bunch of time thinking about
> >> them.
> >>
> >> I'm going to be off the radar for a bit over Christmas - I've had
> >> an insane year and I've promised myself (and my wife) some quiet
> >> time. Your questions have a little star next to them in my gmail
> >> inbox, which means at the very least they'll be ignored less than
> >> the other mail I have to respond to :)
> >>
> >> The one-sentence response, though, is that I honestly don't know
> >> (which is why I need to think about it). I can tell you I think I
> >> isolate services from their dependencies using mocks, I think I
> >> never stub domain objects (I definitely never mock them, but
> >> stubbing them is different), I can't say how I test layers because
> >> I think we have a different definition of layers.
> >>
> >> The reason I'm being so vague is that I usually specify
> >> behaviour from the outside in, starting with the "outermost"
> >> objects (the ones that appear in the scenario steps) and working
> >> inwards as I implement each bit of behaviour. That way I discover
> >> service dependencies that I introduce as mocks, and other domain
> >> objects that become, well, domain objects. Then there are other
> >> little classes that fall out of the mix that seem to make sense as
> >> I go along. I don't usually start out with much more of a strategy
> >> than that. I can't speak as a tester because I'm not one, so I
> >> can't really give you a sensible answer for how isolated my tests
> >> are.
> >> I simply don't have tests at that level. At an acceptance
> >> level my scenarios only ever use real objects wired together doing
> >> full end-to-end testing. Sometimes I'll swap in a lighter-weight
> >> implementation (say using an in-memory database rather than a
> >> remote one, or an in-thread Java web container like Jetty rather
> >> than firing up Tomcat), but all the wiring is still the same (say
> >> JDBC or HTTP-over-the-wire). I'm still not entirely sure how this
> >> maps to Rails, but in Java MVC web apps I would want the controller
> >> examples failing if the model's behaviour changed in a particular
> >> way, so I can't think of a reason why I would want fake domain
> >> objects.
> >>
> >> Like I said, I'll have a proper think and get back to you.
> >>
> >> Cheers,
> >> Dan
> >>
> >> On Dec 15, 2007 7:17 AM, Pat Maddox <pergesu at gmail.com> wrote:
> >> On Dec 8, 2007 4:06 AM, Dan North <tastapod at gmail.com> wrote:
> >>> I prefer the mantra "mock roles, not objects", in other words,
> >>> mock things that have behaviour (services, components, resources,
> >>> whatever your preferred term is) rather than stubbing out domain
> >>> objects themselves. If you have to mock domain objects it's
> >>> usually a smell that your domain implementation is too tightly
> >>> coupled to some infrastructure.
> >>
> >> Assuming you could easily write Rails specs using the real domain
> >> objects, but not hit the database, would you "never" mock domain
> >> objects (where "never" means you deviate only in extraordinary
> >> circumstances)? I'm mostly curious in the interaction between
> >> controller and model...if you use real models, then changes to the
> >> model code could very well lead to failing controller specs, even
> >> though the controller's logic is still correct.
> >>
> >> What is your opinion on isolating tests? Do you try to test each
> >> class in complete isolation, mocking its collaborators?
> >> When you use
> >> interaction-based tests, do you always leave those in place, or do you
> >> substitute real objects as you implement them (and if so, do your
> >> changes move to a more state-based style)? How do you approach
> >> specifying different layers within an app? Do you use real
> >> implementations if there are lightweight ones available, or do you
> >> mock everything out?
> >>
> >> I realize that's a ton of questions...I'd be very grateful (and
> >> impressed!) if you took the time to answer any of them. Also I'd love
> >> to hear input from other people as well.
> >>
> >> Pat
> >>
> >> _______________________________________________
> >> rspec-users mailing list
> >> rspec-users at rubyforge.org
> >> http://rubyforge.org/mailman/listinfo/rspec-users
> >
> > _______________________________________________
> > rspec-users mailing list
> > rspec-users at rubyforge.org
> > http://rubyforge.org/mailman/listinfo/rspec-users
>
> _______________________________________________
> rspec-users mailing list
> rspec-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/rspec-users
>
For me, this has certainly been the most enjoyable and interesting part of using RSpec: finding answers to these questions in a context that suits the project. Of course, I am new to this, but I have found an approach that works well for my current project. However, my approach is wide open to review and improvement and will no doubt evolve well beyond its current scope in future.

I am still reading and re-reading Dan's previous mail regarding the Builder pattern as it is very elegant, although I am not using it now as I feel that it could introduce a little too much overhead to maintain the Builders. I am also considering Dan's mantra, and with more RSpec experience I'll gain a better insight into how this can work in Rails.

Here's what I'm doing that works well for our project:

Mocking

* In views and controllers I always use mocks and stub out any responses that I spot or that are flagged up by autotest (yep, autotest is ace for highlighting those methods that need stubbing).
* In models, I only use a real model for the model specific to the test. I always mock all other interacting models - so in a test for a project with many tasks, the tasks model is mocked and then stubbed.
* I only define the expectation for any mock or real model in one place. So, in our app, the expected definition of a Project is defined once and that definition is used by all tests that use that object, from views to models. Its default values can be overwritten, but the expectation is set for all uses. More info below:

Factories

So far, I am finding that a factory class offers a useful *glue* between the intentionally separated unit tests. So, even though all tests are isolated from each other with mocks, they still share an expectation of what any used mock should look like. This enables me to be aware of the system-wide impact of a change to a small component of the system.
I fully accept that this should be covered by integration testing and not unit testing, but on a quick project, I am not sure I can justify (not yet at least) the time to write unit tests and then integration tests, especially as the test team will go at the app with Selenium. As I say, mine is an evolving platform :-)

Here's how we are using a factory. I hope it helps and that I don't get too grilled for the design and implementation :-)

###
# The factory class houses the expected
# definition of each object and returns
# mocks or real models depending on the request.
# Its attribute values (but not keys) can be overwritten.
# See 'validate_attributes' method below
###
module Factory

  def self.create_project(attributes = {}, mock = false)
    @default_attributes = {
      :name => "Mock Project",
      :synopsis => "Mock Project Synopsis"
    }
    create_object(attributes, mock, Project)
  end

  private

  def self.create_object(custom_attributes, mock, object_type)
    validate_attributes(custom_attributes)
    attributes = @default_attributes.merge(custom_attributes)
    if mock
      attributes.each_pair do |key, value|
        mock.stub!(key).and_return(value)
      end
      mock
    else
      object_type.create attributes
    end
  end

  ###
  # The following method validates that any received
  # custom attribute's key is in the expected attribute
  # list for the object.
  # If not, the test fails, forcing
  # the developer to keep the factory defaults up to
  # date with any changes
  ###
  def self.validate_attributes(attributes)
    attributes.each_key do |a|
      raise "Unrecognised attribute '#{a}' was passed into the Factory" unless @default_attributes.has_key?(a)
    end
    true
  end

end

###
# Projects controller test interacts
# with the Factory and receives mocks
###
describe ProjectsController do
  include Factory

  before(:each) do
    @project1 = Factory.create_project({}, mock_model(Project))
    @project2 = Factory.create_project({:name => "My second project",
                                        :synopsis => "This is another fantastic project"},
                                       mock_model(Project))
    @projects = [@project1, @project2]
  end
end

###
# The Project model test interacts with
# the Factory and receives real models
###
describe Project do
  include Factory

  before(:each) do
    Project.destroy_all

    # Real project
    @project = Factory.create_project

    # Stub Role
    @role = Factory.create_role({}, mock_model(Role))
    @role.stub!(:quoted_id).and_return(true)
    @role.stub!(:[]=).and_return(true)
    @role.stub!(:save).and_return(true)
  end
end
On Dec 17, 2007 5:10 AM, Andy Goundry <andy at adveho.net> wrote:
> I am also considering Dan's mantra

Dan's mantra of "mock roles, not objects" comes from http://www.jmock.org/oopsla2004.pdf, a paper of the same name. My read on this differs from Dan's a bit. I'll follow up on that later, but you might want to give it a read and form your own opinion before I poison you with mine :)
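The distinction the paper draws can be illustrated in framework-free Ruby (all class and method names below are invented for the example): the notifier is a role with behaviour, so it gets a hand-rolled fake; Project is a domain object, so a real one is used.

```ruby
# A domain object: plain state, used for real in tests.
class Project
  attr_reader :name
  def initialize(name)
    @name = name
  end
end

# A service that collaborates with the "notifier" role. Only the
# role's interface (a notify method) matters, not any concrete class.
class ProjectPublisher
  def initialize(notifier)
    @notifier = notifier
  end

  def publish(project)
    @notifier.notify("Published #{project.name}")
  end
end

# A hand-rolled fake playing the notifier role; it records messages
# so the test can inspect what the publisher said to it.
class FakeNotifier
  attr_reader :messages
  def initialize
    @messages = []
  end

  def notify(message)
    @messages << message
  end
end

notifier = FakeNotifier.new
ProjectPublisher.new(notifier).publish(Project.new("Demo"))
puts notifier.messages.inspect  # prints ["Published Demo"]
```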
On Dec 17, 2007, at 3:25 AM, Brian Takita wrote:
> On Dec 16, 2007 7:43 PM, Scott Taylor
> <mailing_lists at railsnewbie.com> wrote:
>>
>> Francis and Pat probably know my thoughts on this already, but as
>> far as I can see it, mocks (at least the message-based ones) are
>> popular for one reason in the rails / active-record world:
>>
>> Speed. Mocks are extremely fast. I don't think it's uncommon for
>> those who write specs for rails projects to have a full test suite
>> running in under 20 seconds if they are mocking all dependencies.
>> Primarily, this means using mocks for all associations on a Rails
>> model, and using only mocks for controller specs.
> My experience with AR is that AR itself (mainly object instantiation)
> is slow, not the queries.
> Mocking the queries did not result in a worthwhile test run time
> savings.
> Rails creates lots of objects, which causes lots of slowness. It's
> death by a thousand cuts.
>
> I guess one could mock out the entire AR object, but I'm not convinced
> that it would result in large performance benefits in many cases.
> I've tried doing this a couple of times and did not save much time at
> all. Of course, this was done in view examples on a project that uses
> Markaby (which is slow).
>
> Whatever you do, I recommend taking performance metrics of your suite
> as you try to diagnose the slowness. The results will probably be
> surprising.

Certainly. A lesson in premature optimization. Although, I did notice that my test suite took about half the time with an in-memory sqlite3 database, so I would find it hard to believe that most of the time is spent in object creation - but...off to do some benchmarking.

Scott
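For reference, the usual recipe for the in-memory sqlite3 setup Scott mentions is a database.yml test entry along these lines (a sketch of the common technique of the era, not his actual configuration; adapter support for :memory: varied across Rails versions):

```yaml
test:
  adapter: sqlite3
  database: ":memory:"
```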
On Dec 17, 2007 11:02 AM, Scott Taylor <mailing_lists at railsnewbie.com> wrote:
>
>
> On Dec 17, 2007, at 3:25 AM, Brian Takita wrote:
>
> > On Dec 16, 2007 7:43 PM, Scott Taylor
> > <mailing_lists at railsnewbie.com> wrote:
> >>
> >> Francis and Pat probably know my thoughts on this already, but as
> >> far as I can see it, mocks (at least the message-based ones) are
> >> popular for one reason in the rails / active-record world:
> >>
> >> Speed. Mocks are extremely fast. I don't think it's uncommon for
> >> those who write specs for rails projects to have a full test suite
> >> running in under 20 seconds if they are mocking all dependencies.
> >> Primarily, this means using mocks for all associations on a Rails
> >> model, and using only mocks for controller specs.
> > My experience with AR is that AR itself (mainly object instantiation)
> > is slow, not the queries.
> > Mocking the queries did not result in a worthwhile test run time
> > savings.
> > Rails creates lots of objects, which causes lots of slowness. It's
> > death by a thousand cuts.
> >
> > I guess one could mock out the entire AR object, but I'm not convinced
> > that it would result in large performance benefits in many cases.
> > I've tried doing this a couple of times and did not save much time at
> > all. Of course, this was done in view examples on a project that uses
> > Markaby (which is slow).
> >
> > Whatever you do, I recommend taking performance metrics of your suite
> > as you try to diagnose the slowness. The results will probably be
> > surprising.
>
> Certainly. A lesson in premature optimization. Although, I did
> notice that my test suite took about half the time with an in-memory
> sqlite3 database, so I would find it hard to believe that most of the
> time is spent in object creation - but...off to do some benchmarking.

True. I also did some custom fixture optimizations. For some reason, instantiating a Fixture object instance is very slow.
I've rigged it so there is only one instance of a Fixture object for each table for the entire process. Of course this would break fixture scenarios.

I've had around 20-30% increases using in-memory sqlite, about 1 year ago. I haven't tried it since.

> Scott
>
>
> _______________________________________________
> rspec-users mailing list
> rspec-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/rspec-users
>
On Dec 17, 2007 3:02 PM, Brian Takita <brian.takita at gmail.com> wrote:
>
> On Dec 17, 2007 11:02 AM, Scott Taylor <mailing_lists at railsnewbie.com> wrote:
> >
> >
> > On Dec 17, 2007, at 3:25 AM, Brian Takita wrote:
> >
> > > On Dec 16, 2007 7:43 PM, Scott Taylor
> > > <mailing_lists at railsnewbie.com> wrote:
> > >>
> > >> Francis and Pat probably know my thoughts on this already, but as
> > >> far as I can see it, mocks (at least the message-based ones) are
> > >> popular for one reason in the rails / active-record world:
> > >>
> > >> Speed. Mocks are extremely fast. I don't think it's uncommon for
> > >> those who write specs for rails projects to have a full test suite
> > >> running in under 20 seconds if they are mocking all dependencies.
> > >> Primarily, this means using mocks for all associations on a Rails
> > >> model, and using only mocks for controller specs.
> > > My experience with AR is that AR itself (mainly object instantiation)
> > > is slow, not the queries.
> > > Mocking the queries did not result in a worthwhile test run time
> > > savings.
> > > Rails creates lots of objects, which causes lots of slowness. It's
> > > death by a thousand cuts.
> > >
> > > I guess one could mock out the entire AR object, but I'm not convinced
> > > that it would result in large performance benefits in many cases.
> > > I've tried doing this a couple of times and did not save much time at
> > > all. Of course, this was done in view examples on a project that uses
> > > Markaby (which is slow).
> > >
> > > Whatever you do, I recommend taking performance metrics of your suite
> > > as you try to diagnose the slowness. The results will probably be
> > > surprising.
> >
> > Certainly. A lesson in premature optimization.
> > Although, I did
> > notice that my test suite took about half the time with an in-memory
> > sqlite3 database, so I would find it hard to believe that most of
> > the time is spent in object creation - but...off to do some
> > benchmarking.
> True. I also did some custom fixture optimizations. For some reason,
> instantiating a Fixture object instance is very slow. I've rigged it
> so there is only one instance of a Fixture object for each table for
> the entire process.
> Of course this would break fixture scenarios.

Did you do that in rspec? Or in your own project?

>
> I've had around 20-30% increases using in-memory sqlite, about 1 year
> ago. I haven't tried it since.
>
> >
> > Scott
> >
> >
> > _______________________________________________
> > rspec-users mailing list
> > rspec-users at rubyforge.org
> > http://rubyforge.org/mailman/listinfo/rspec-users
> >
> _______________________________________________
> rspec-users mailing list
> rspec-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/rspec-users
>
On Dec 17, 2007 1:08 PM, David Chelimsky <dchelimsky at gmail.com> wrote:
>
> On Dec 17, 2007 3:02 PM, Brian Takita <brian.takita at gmail.com> wrote:
> >
> > On Dec 17, 2007 11:02 AM, Scott Taylor <mailing_lists at railsnewbie.com> wrote:
> > >
> > >
> > > On Dec 17, 2007, at 3:25 AM, Brian Takita wrote:
> > >
> > > > On Dec 16, 2007 7:43 PM, Scott Taylor
> > > > <mailing_lists at railsnewbie.com> wrote:
> > > >>
> > > >> Francis and Pat probably know my thoughts on this already, but as
> > > >> far as I can see it, mocks (at least the message-based ones) are
> > > >> popular for one reason in the rails / active-record world:
> > > >>
> > > >> Speed. Mocks are extremely fast. I don't think it's uncommon for
> > > >> those who write specs for rails projects to have a full test suite
> > > >> running in under 20 seconds if they are mocking all dependencies.
> > > >> Primarily, this means using mocks for all associations on a Rails
> > > >> model, and using only mocks for controller specs.
> > > > My experience with AR is that AR itself (mainly object instantiation)
> > > > is slow, not the queries.
> > > > Mocking the queries did not result in a worthwhile test run time
> > > > savings.
> > > > Rails creates lots of objects, which causes lots of slowness. It's
> > > > death by a thousand cuts.
> > > >
> > > > I guess one could mock out the entire AR object, but I'm not convinced
> > > > that it would result in large performance benefits in many cases.
> > > > I've tried doing this a couple of times and did not save much time at
> > > > all. Of course, this was done in view examples on a project that uses
> > > > Markaby (which is slow).
> > > >
> > > > Whatever you do, I recommend taking performance metrics of your suite
> > > > as you try to diagnose the slowness. The results will probably be
> > > > surprising.
> > >
> > > Certainly. A lesson in premature optimization.
> > > Although, I did
> > > notice that my test suite took about half the time with an in-memory
> > > sqlite3 database, so I would find it hard to believe that most of
> > > the time is spent in object creation - but...off to do some
> > > benchmarking.
> > True. I also did some custom fixture optimizations. For some reason,
> > instantiating a Fixture object instance is very slow. I've rigged it
> > so there is only one instance of a Fixture object for each table for
> > the entire process.
> > Of course this would break fixture scenarios.
>
> Did you do that in rspec? Or in your own project?

My own project. I overrode Test::Unit::TestCase @@already_loaded_fixtures with a shim.

> >
> > I've had around 20-30% increases using in-memory sqlite, about 1 year
> > ago. I haven't tried it since.
> >
> > >
> > > Scott
> > >
> > >
> > > _______________________________________________
> > > rspec-users mailing list
> > > rspec-users at rubyforge.org
> > > http://rubyforge.org/mailman/listinfo/rspec-users
> > >
> > _______________________________________________
> > rspec-users mailing list
> > rspec-users at rubyforge.org
> > http://rubyforge.org/mailman/listinfo/rspec-users
> >
> _______________________________________________
> rspec-users mailing list
> rspec-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/rspec-users
>
> True. I also did some custom fixture optimizations. For some reason,
> instantiating a Fixture object instance is very slow. I've rigged it
> so there is only one instance of a Fixture object for each table for
> the entire process.
> Of course this would break fixture scenarios.
>
> I've had around 20-30% increases using in-memory sqlite, about 1 year
> ago. I haven't tried it since.

Interesting. I'm not using Fixtures, so I guess this isn't an option for me. (I need to figure out a way to speed up FixtureReplacement.)

What was so slow in the fixture instantiation?

Scott
On Dec 17, 2007 1:29 PM, Scott Taylor <mailing_lists at railsnewbie.com> wrote:
>
>
> > True. I also did some custom fixture optimizations. For some reason,
> > instantiating a Fixture object instance is very slow. I've rigged it
> > so there is only one instance of a Fixture object for each table for
> > the entire process.
> > Of course this would break fixture scenarios.
> >
> > I've had around 20-30% increases using in-memory sqlite, about 1 year
> > ago. I haven't tried it since.
>
> Interesting. I'm not using Fixtures, so I guess this isn't an option
> for me. (I need to figure out a way to speed up FixtureReplacement.)
>
> What was so slow in the fixture instantiation?

I didn't isolate what about fixture instantiation was slow. It reads the yaml files and converts the hash into objects. All I know is when I did the optimization, I got around a 30% performance increase when loading all fixtures in all Examples.

> Scott
>
>
> _______________________________________________
> rspec-users mailing list
> rspec-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/rspec-users
>
On Dec 17, 2007, at 4:42 PM, Brian Takita wrote:
> On Dec 17, 2007 1:29 PM, Scott Taylor
> <mailing_lists at railsnewbie.com> wrote:
>>
>>
>>> True. I also did some custom fixture optimizations. For some reason,
>>> instantiating a Fixture object instance is very slow. I've rigged it
>>> so there is only one instance of a Fixture object for each table for
>>> the entire process.
>>> Of course this would break fixture scenarios.
>>>
>>> I've had around 20-30% increases using in-memory sqlite, about 1 year
>>> ago. I haven't tried it since.
>>
>> Interesting. I'm not using Fixtures, so I guess this isn't an option
>> for me. (I need to figure out a way to speed up FixtureReplacement.)
>>
>> What was so slow in the fixture instantiation?
> I didn't isolate what about fixture instantiation was slow. It reads
> the yaml files and converts the hash into objects.
> All I know is when I did the optimization, I got around a 30%
> performance increase when loading all fixtures in all Examples.

I assume you were using instantiated fixtures, and not transactional fixtures?

Scott
On Dec 17, 2007 2:08 PM, Scott Taylor <mailing_lists at railsnewbie.com> wrote:
>
> On Dec 17, 2007, at 4:42 PM, Brian Takita wrote:
>
> > I didn't isolate what about fixture instantiation was slow. It reads
> > the yaml files and converts the hash into objects.
> > All I know is when I did the optimization, I got around a 30%
> > performance increase when loading all fixtures in all Examples.
>
> I assume you were using instantiated fixtures, and not transactional
> fixtures?

I was using transactional fixtures. This was before the Rails 2.0
fixture optimizations, so I'm not sure whether the same applies today.

>
> Scott
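[Editor's note: for readers who want to try the in-memory SQLite approach mentioned above, here is a minimal sketch of the test-database configuration. The adapter name and ":memory:" value are standard SQLite3 settings, but note that an in-memory database is empty at the start of every process, so the schema has to be loaded into it before the examples run — that setup step is assumed here, not shown.]

```yaml
# config/database.yml -- hypothetical test section for a Rails app of
# this era; the schema must be (re)loaded into memory each test run.
test:
  adapter: sqlite3
  database: ":memory:"
```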
I know the questions are directed towards Dan, so I hope you don't mind
me chiming in. My comments are inline.

On Dec 15, 2007 2:17 AM, Pat Maddox <pergesu at gmail.com> wrote:
>
> On Dec 8, 2007 4:06 AM, Dan North <tastapod at gmail.com> wrote:
> > I prefer the mantra "mock roles, not objects", in other words, mock
> > things that have behaviour (services, components, resources,
> > whatever your preferred term is) rather than stubbing out domain
> > objects themselves. If you have to mock domain objects it's usually
> > a smell that your domain implementation is too tightly coupled to
> > some infrastructure.
>
> Assuming you could easily write Rails specs using the real domain
> objects, but not hit the database, would you "never" mock domain
> objects (where "never" means you deviate only in extraordinary
> circumstances)? I'm mostly curious about the interaction between
> controller and model...if you use real models, then changes to the
> model code could very well lead to failing controller specs, even
> though the controller's logic is still correct.

In Java you don't mock domain objects because you want to program
toward an interface rather than a single concrete implementation.
Conceptually this still applies in Ruby, but because of differences
between the languages it isn't a 1-to-1 mapping in practice.

With regard to controllers and models in Rails, I don't want my
controller spec to use real model objects. The requirement that a model
exist can be found as part of the discovery process for what a
controller needs to do its job. If the implementation of a model is
wrong, it isn't the job of the controller spec to report the failure.
It's the job of the model spec or an integration test (if it's an
integration-related issue) to report the failure.

When you make it the job of the controller spec to ensure that the real
model objects work correctly within a controller, it is usually because
there is a lack of integration tests and controller specs are being
used to fill the void.

Also, controllers can achieve a better level of programming toward the
"interface" rather than a concrete class by using dependency injection.
For example, consider lightweight DI using the injection plugin
(http://atomicobjectrb.rubyforge.org/injection/):

  class PhotosController < ApplicationController
    inject :project_repository

    def index
      @projects = @project_repository.find_projects
    end
  end

There is a config/objects.yml file which defines what
project_repository is:

  ---
  project_repository:
    use_class_directly: true
    class: Project

This removes any unneeded coupling between the controller and the
model. The most common thing I've seen in Rails, though, is to
partially mock the Project class in your spec. Although this works,
there is unnecessary coupling between your controller and a concrete
model class.

> What is your opinion on isolating tests?

Tests should be responsible for ensuring an object works as expected,
so it's usually a good thing to isolate objects under test to ensure
that they are working as expected. If you don't isolate, then you end
up with a lot of little integration tests. Now when one implementation
is wrong you get 100 test failures rather than 1 or 2, which can be a
headache when you're trying to find out why something failed.

> Do you try to test each class in complete isolation, mocking its
> collaborators?

Yes. The pattern I find I follow in testing is that objects whose job
it is to coordinate or manage other objects (like controllers,
presenters, managers, etc.) are always tested in isolation.
Interaction-based testing is the key here. These objects can be
considered branch objects. They connect to other branches or to leaf
node objects.

Leaf node objects are the end of the line and they do the actual work.
Here is where I use state-based testing. I consider ActiveRecord models
leaf nodes.

A practice I've been following, inspired by a coworker, is that an
object should be a branch or a leaf, but not both. Most Rails
applications don't follow anything like this, and it's common to find
bloated controllers and bloated models (most people, IMO, do not
understand the Skinny Controller, Fat Model post; bloated models are
now becoming an up-and-coming trend, unfortunately).

Objects and methods built into the language, standard library or
framework are exempt from my above statements. If a manager coordinates
return values from methods called on other objects and pushes them onto
an array, I don't mock a call to Array.new.

> When you use interaction-based tests, do you always leave those in
> place, or do you substitute real objects as you implement them (and
> if so, do your changes move to a more state-based style)?

Leave the mocked-out collaborators in place. An interaction-based test
verifies that the correct interaction takes place. As soon as you
remove the mock and substitute it with a real object, your test has
become compromised. It's no longer verifying that the correct
interaction occurs; it now only makes sure your test doesn't die with a
real object.

If you do substitute in a real object, the only way you would be able
to maintain the integrity of the test is to partially mock your real
object to expect the right methods to be called. This will ensure that
the interaction continues to take place. But what happens is that the
test gets muddied up with things that don't need to be there.

> How do you approach specifying different layers within an app?

One way to think about this is in terms of composition and inheritance.
When layers interact using composition, you treat and test them
differently than if you use inheritance. For example, a
ProjectsController using a @project_repository (see the injection
example above) versus a Project model subclassing ActiveRecord::Base.
I need to think about them some more, though.

> Do you use real implementations if there are lightweight ones
> available, or do you mock everything out?

For me it depends. With most Rails projects I've worked on, there has
been one suite of integration tests against the application as a whole
and then a bunch of unit tests. The times this has differed are when
the application relied on third-party services. These services would be
replaced with dummy or lightweight implementations for my integration
tests (for example, geocoding), although there would be another set of
integration tests to specifically test our app against the actual
service.

An integration test should test that real objects working together
correctly produce the intended system behavior. You should never mock
objects out at this level, but you may need to provide stub
implementations for third-party services.

> I realize that's a ton of questions...I'd be very grateful (and
> impressed!) if you took the time to answer any of them. Also I'd love
> to hear input from other people as well.

It's too bad we can't just stand at a whiteboard and talk this out. The
answers to these questions could fill a book, and email hardly does
them justice with clear, coherent and complete answers. Not that my
responses are "answers" to your questions, but it's how I think about
testing and TDD.

--
Zach Dennis
http://www.continuousthinking.com
http://www.atomicobject.com
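[Editor's note: the repository idea above can be reduced to a plain-Ruby sketch. The names (`ProjectsController`, `FakeProjectRepository`, `find_projects`) are hypothetical, and constructor injection stands in for what the injection plugin actually does via config/objects.yml. The point it illustrates is the one Zach makes: the controller depends only on the `find_projects` role, so a spec can hand it any stand-in.]

```ruby
# A controller-like branch object that depends on a "repository" role
# rather than on a concrete model class. Illustrative only; the real
# injection plugin wires the dependency up from config/objects.yml.
class ProjectsController
  def initialize(project_repository)
    @project_repository = project_repository
  end

  def index
    @project_repository.find_projects
  end
end

# In a test, any object answering find_projects satisfies the role --
# no database, no ActiveRecord, no mock framework required.
class FakeProjectRepository
  def find_projects
    [:project_a, :project_b]
  end
end

controller = ProjectsController.new(FakeProjectRepository.new)
puts controller.index.inspect  # => [:project_a, :project_b]
```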
> If the implementation of a model is wrong, it isn't the job of the
> controller spec to report the failure. It's the job of the model spec
> or an integration test (if it's an integration-related issue) to
> report the failure.

It seems that it would be very easy to change a model, thereby breaking
the controller, and not realize it. Let's say that we decide to change
the implementation of a model; how do you then go about finding the
controllers that need to be updated? I know this is the classic
argument between classicists and mockists, but I don't see the benefit
of this type of strict mocking. If the integration test is required,
then what benefit are we getting from the mock, and is it worth the
cost?

> Also, controllers can achieve a better level of programming toward
> the "interface" rather than a concrete class by using dependency
> injection.

I don't see any reason to use DI in a dynamic language like Ruby. I
also see no reason in this specific case. Let's assume we're working on
a Rails social networking site. If we have a Blog controller and a Blog
model class, there is no reason to use DI to inject the blog model into
the blog controller. It isn't removing unneeded coupling, it's adding
unneeded complexity. In Java this injection is necessary to make things
like testing easier, but it is wholly unnecessary in a language like
Ruby.

> If you don't isolate then you end up with a lot of little integration
> tests. Now when one implementation is wrong you get 100 test failures
> rather than 1 or 2, which can be a headache when you're trying to
> find out why something failed.

This has never been a headache for me. If you run your tests often,
you'll know what was changed recently and it's trivial to find the
problem. Also, if you run localized tests frequently you'll see the
error without seeing the failures that it causes throughout the test
suite, and you still get the benefit of mini integration tests ;)

In all honesty I'm trying to

____________________________________________________________________________________
Be a better friend, newshound, and
know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ
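[Editor's note: Jay's claim that Ruby's dynamism removes much of the need for DI can be sketched in plain Ruby. All names here are hypothetical, not from anyone's codebase. Because the controller's reference to `Blog` is resolved at call time, a test can swap the collaborator's behavior in place and restore it afterward, with no injection machinery.]

```ruby
class Blog
  def self.recent
    # imagine a database query here
    ["real post"]
  end
end

class BlogsController
  def index
    Blog.recent  # direct reference to the concrete class
  end
end

# A test can redefine the collaborator in place and restore it after --
# the ensure clause guarantees the original method comes back.
def with_stubbed_recent(value)
  original = Blog.method(:recent)
  Blog.define_singleton_method(:recent) { value }
  yield
ensure
  Blog.define_singleton_method(:recent, original)
end

with_stubbed_recent(["stubbed post"]) do
  puts BlogsController.new.index.inspect  # => ["stubbed post"]
end
```

This is essentially what rspec's partial mocking of a class does for you; the sketch just shows why the language makes it possible without injecting anything.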
On 26 dec 2007, at 07:26, Jay Donnell wrote:

> In all honesty I'm trying to
>
> ____________________________________________________________________________________
> Be a better friend, newshound, and
> know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ

You're trying what? :)

gr,
bartz
On Dec 26, 2007 1:26 AM, Jay Donnell <jaydonnell at yahoo.com> wrote:
> > If the implementation of a model is wrong, it isn't the job of the
> > controller spec to report the failure. It's the job of the model
> > spec or an integration test (if it's an integration-related issue)
> > to report the failure.
>
> It seems that it would be very easy to change a model, thereby
> breaking the controller, and not realize it. Let's say that we decide
> to change the implementation of a model, how do you then go about
> finding the controllers that need to be updated?

The integration test will die if you broke functionality.

> I know this is the classic argument between classicists and mockists,
> but I don't see the benefit of this type of strict mocking. If the
> integration test is required then what benefit are we getting from
> the mock and is it worth the cost?

At either level an integration test is required. I prefer extracting it
out into its own test so I can simplify (by which I mean isolate) my
controller. The other option is to give the test of the controller the
dual responsibility of testing that the controller works correctly by
itself and also that it works correctly with real models.

By isolating the controller and doing interaction-based testing I find
that I end up with simpler controllers and simpler objects. I think
this is because my tests become increasingly painful to write the more
crap I try to shove onto my controller. I have learned to listen to
them and start extracting out other objects when my tests become
painful, because it's usually a sign.

I also prefer acceptance-test-driven development, which is TDD on top
of top-down development, so interaction-based testing is important
since the model is usually one of the last things I create.

> > Also, controllers can achieve a better level of programming toward
> > the "interface" rather than a concrete class by using dependency
> > injection.
>
> I don't see any reason to use DI in a dynamic language like Ruby. I
> also see no reason in this specific case. Let's assume we're working
> on a Rails social networking site. If we have a Blog controller and a
> Blog model class, there is no reason to use DI to inject the blog
> model into the blog controller. It isn't removing unneeded coupling,
> it's adding unneeded complexity. In Java this injection is necessary
> to make things like testing easier, but it is wholly unnecessary in a
> language like Ruby.

This is the Jamis Buck argument: DI is unneeded in Ruby as it is
implemented in Java. Needle and Copland are Java-style implementations
in Ruby and they should be avoided. I do not agree that DI is wholly
unneeded. In my experience the Injection library has been very
lightweight and it has worked well in my controllers for Rails apps.

The only way to get around DI is to have every class/module know about
every other class/module it deals with, OR to reopen classes and
override methods which would supply an object. Both of these have their
shortcomings. I am not advocating using DI for the sake of DI, but it
can be useful.

For example, I often extract out date, authentication, etc. helpers and
managers. So in my BlogsController there may be a reference to the Blog
model because, as you say, that is not unneeded coupling; however, my
BlogsController requires authentication, and rather than dealing with a
LoginManager directly it deals with a @login_manager. Having my
BlogsController know about the LoginManager implementation is unneeded
coupling. It needs to be able to authenticate; it doesn't need to know
which implementation it uses to authenticate.

From a development perspective you end up with a declarative list of
objects your implementation relies on. It's highly readable what your
controller depends on to do its job. This is a supporting +1 in my
opinion.

> > If you don't isolate then you end up with a lot of little
> > integration tests. Now when one implementation is wrong you get 100
> > test failures rather than 1 or 2, which can be a headache when
> > you're trying to find out why something failed.
>
> This has never been a headache for me. If you run your tests often
> you'll know what was changed recently and it's trivial to find the
> problem. Also, if you run localized tests frequently you'll see the
> error without seeing the failures that it causes throughout the test
> suite and you still get the benefit of mini integration tests ;)

I agree we should be running tests frequently.

One of the things you didn't hit on is how you test objects which
coordinate interactions vs. those that do the work -- those branch/leaf
object scenarios. How do you see testing those? Do you see the
separation of testing concerns as non-existent, because doing only
state-based testing will cause every failure (even when an object is
working correctly, but the objects it's coordinating are broken)? I
guess if the LoginManager is working correctly, it seems wrong in
principle and practice to have it be red if the User object is broken.

--
Zach Dennis
http://www.continuousthinking.com
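[Editor's note: Zach's LoginManager/User question can be made concrete with a hand-rolled recording mock. All names are hypothetical, and rspec's mock framework provides this kind of object for real; the sketch only shows the principle he is arguing for — if a branch object like LoginManager is tested purely by its interactions, a broken User implementation cannot turn its examples red.]

```ruby
# A minimal recording mock: it remembers every call it receives and
# answers from a canned table of results.
class RecordingMock
  attr_reader :calls

  def initialize(results = {})
    @results = results
    @calls = []
  end

  def method_missing(name, *args)
    @calls << [name, args]
    @results[name]
  end
end

# LoginManager is a branch object: it coordinates a user source, it
# does not do the lookup itself.
class LoginManager
  def initialize(user_source)
    @user_source = user_source
  end

  def authenticated?(login)
    !@user_source.find_by_login(login).nil?
  end
end

users = RecordingMock.new(:find_by_login => "a user record")
manager = LoginManager.new(users)
manager.authenticated?("zach")

# The interaction, not the state, is what the example verifies:
puts users.calls.inspect  # => [[:find_by_login, ["zach"]]]
```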
My first email sounded more certain than I intended. I'm trying to work
out my own views by throwing out ideas in a peer review fashion.

> The integration test will die if you broke functionality.

I'm curious what you mean by 'integration test'. With regards to Rails,
are your integration tests always testing the full stack and verifying
against the returned html/xml, i.e. verifying against the UI? If so, I
find it exceptionally harder testing against the UI than testing
against the controller directly, and for many of my apps I don't do
comprehensive UI tests because they've bitten me too many times in the
past and I haven't figured out a better way to do them. This is one of
the reasons that I like having my controller tests also be mini
integration tests.

> By isolating the controller and doing interaction-based testing I
> find that I end up with simpler controllers and more simple objects.

Yeah, I've had this experience as well, but this begs the question: are
these interaction-based tests really tests? I mean, they aren't really
testing anything except assumptions that could easily be false. To me
it feels like a great design tool that isn't really testing anything.

> I also prefer acceptance-test-driven development, which is TDD on top
> of top-down development, so interaction-based testing is important
> since the model is usually one of the last things I create.

I'm trying out this approach with a Flex application I'm making that
has a Rails backend. So far so good, and the Flex UI only talks to the
Rails portion via RESTful calls. You can do top-down by mocking the
backend with simple flat XML files which you then implement after the
UI is completed against the mocks.

> I am not advocating using DI for the sake of DI, but it can be
> useful.

I agree; what I was trying to say is that it's the exception rather
than the rule for me personally.

> One of the things you didn't hit on is how you test objects which
> coordinate interactions vs. those that do the work -- those
> branch/leaf object scenarios. How do you see testing those? Do you
> see the separation of testing concerns as non-existent, because doing
> only state-based testing will cause every failure (even when an
> object is working correctly, but the objects it's coordinating are
> broken)?

This may be a matter of scale. The largest Rails app I've written is
under 10k lines of production code with a bit over 10k lines of test
code. This is relatively small, and at this level causing "every
failure" hasn't posed a real problem. One of the lessons from the Java
world is that techniques which are needed for large-scale applications
are often a hindrance to small teams working on a much smaller scale.

Back to your question: how would a coordinated object get broken? I can
see this happening on a large team, but with small teams it's rare.
And when it does happen, it's nice getting notified from the
controller's test. Basically, my controller tests are my integration
tests, which gets back to my earlier question about how you do
integration tests.

Jay
On Dec 23, 2007 2:27 PM, Zach Dennis <zach.dennis at gmail.com> wrote:
> I know the questions are directed towards Dan so I hope you don't
> mind me chiming in. My comments are inline.

Thanks a lot for your comments, I really appreciate them. I've been
dying to respond to this for the past several days, but haven't had
internet access.

> In Java you don't mock domain objects because you want to program
> toward an interface rather than a single concrete implementation.
> Conceptually this still applies in Ruby, but because of differences
> between the languages it isn't a 1-to-1 mapping in practice.

I must be misunderstanding you here, because you say you "don't mock
domain objects," and the rest of your email suggests that you mock
basically everything.

<snip>

> > Do you try to test each class in complete isolation, mocking its
> > collaborators?
>
> Yes. The pattern I find I follow in testing is that objects whose job
> it is to coordinate or manage other objects (like controllers,
> presenters, managers, etc.) are always tested in isolation.
> Interaction-based testing is the key here. These objects can be
> considered branch objects. They connect to other branches or to leaf
> node objects.
>
> Leaf node objects are the end of the line and they do the actual
> work. Here is where I use state-based testing. I consider
> ActiveRecord models leaf nodes.

What about interactions between ActiveRecord objects? If a User
has_many Subscriptions, do you mock out those interactions? Would you
still mock them out if User and Subscription were PROs (plain Ruby
objects) and persistence were handled separately?

> > When you use interaction-based tests, do you always leave those in
> > place, or do you substitute real objects as you implement them (and
> > if so, do your changes move to a more state-based style)?
>
> Leave the mocked-out collaborators in place. An interaction-based
> test verifies that the correct interaction takes place. As soon as
> you remove the mock and substitute it with a real object, your test
> has become compromised. It's no longer verifying that the correct
> interaction occurs; it now only makes sure your test doesn't die with
> a real object.

This leads to perhaps a more subtle case of my previous question...
ActiveRecord relies pretty heavily on the real classes of objects. To
me, this means that it would make more sense to use mocks if you
didn't use AR, but use the real objects when you are using AR. Again,
this is only between model objects. I agree that controller specs
should mock them all out.

> I realize that's a ton of questions...I'd be very grateful (and
> impressed!) if you took the time to answer any of them. Also I'd love
> to hear input from other people as well.
>
> It's too bad we can't just stand at a whiteboard and talk this out.
> The answers to these questions could fill a book, and email hardly
> does them justice with clear, coherent and complete answers. Not that
> my responses are "answers" to your questions, but it's how I think
> about testing and TDD.

Thanks again for your thoughtful reply. Looking forward to hearing a
little bit more.

Pat
On Dec 26, 2007 3:23 PM, Pat Maddox <pergesu at gmail.com> wrote:
> > In Java you don't mock domain objects because you want to program
> > toward an interface rather than a single concrete implementation.
> > Conceptually this still applies in Ruby, but because of differences
> > between the languages it isn't a 1-to-1 mapping in practice.
>
> I must be misunderstanding you here, because you say you "don't mock
> domain objects," and the rest of your email suggests that you mock
> basically everything.

I'll respond more later, but wanted to point out that that should be
"In Java you mock domain objects because you want to ...". I don't
know how the "n't" got in there. ;)

Zach Dennis
http://www.continuousthinking.com
On Dec 26, 2007 3:23 PM, Pat Maddox <pergesu at gmail.com> wrote:> On Dec 23, 2007 2:27 PM, Zach Dennis <zach.dennis at gmail.com> wrote: > > I know the questioned are directed towards Dan so I hope you don''t me > > chiming in. My comments are inline. > > Thanks a lot for your comments, I really appreciate them. I''ve been > dying to respond to this for the past several days, but haven''t had > internet access. > > > > On Dec 15, 2007 2:17 AM, Pat Maddox <pergesu at gmail.com> wrote: > > > > > > On Dec 8, 2007 4:06 AM, Dan North <tastapod at gmail.com> wrote: > > > > I prefer the mantra "mock roles, not objects", in other words, mock things > > > > that have behaviour (services, components, resources, whatever your > > > > preferred term is) rather than stubbing out domain objects themselves. If > > > > you have to mock domain objects it''s usually a smell that your domain > > > > implementation is too tightly coupled to some infrastructure. > > > > > > Assuming you could easily write Rails specs using the real domain > > > objects, but not hit the database, would you "never" mock domain > > > objects (where "never" means you deviate only in extraordinary > > > circumstances)? I''m mostly curious in the interaction between > > > controller and model...if you use real models, then changes to the > > > model code could very well lead to failing controller specs, even > > > though the controller''s logic is still correct. > > > > In Java you don''t mock domain objects because you want to program > > toward an interface rather then a single concrete implementation. > > Conceptually this still applies in Ruby, but because of differences > > between the languages it isn''t a 1 to 1 mapping in practice. > > I must be misunderstanding you here, because you say you "don''t mock > domain objects," and the rest of your email suggests that you mock > basically everything. > > <snip> > > > > Do you try to test each > > > class in complete isolation, mocking its collaborators? 
> > > > Yes. The pattern I find I follow in testing is that objects whose job > > it is to coordinate or manage other objects (like controllers, > > presenters, managers, etc) are always tested in isolation. > > Interaction-based testing is the key here. These objects can be > > considered branch objects. They connect to other branches or to leaf > > node objects. > > > > Leaf node objects are the end of the line and they do the actual work. > > Here is where I use state based testing. I consider ActiveRecord > > models leaf nodes. > > What about interactions between ActiveRecord objects. If a User > has_many Subscriptions, do you mock out those interactions?For me it depends. If I am testing my User object and it has a custom method called find_subscriptions_which_have_not_expired_but_which _has_not_been_read_in_over_n_days. I will not mock out any interactions with the subscriptions at that point. This is for two reasons. One, when I first get this to work I may do it in pure ruby code (no SQL help) just to get it working. At some later date/time this is going to move to SQL. I want my test to not have to change in order to do this. If I was interaction-based testing this custom find method then it wouldn''t really help me ensure I didn''t break something. Secondly, I view my model has a leaf node object. Most of everything my model does I want to state based test the thing to ensure the results are what I want (and not the interactions). Some times I find there is a method where I will mock an association because I truly don''t care about the result, and I really only care about the interaction. For example if User delegates something to the Subscription class either via a delegate declaration or a simple method which delegates. For example: delegate :zip_code, :to => :address OR def zip_code address.zip_code end> Would you > still mock them out if User and Subscription were PROs (plain Ruby > objects) and persistence were handled separately?Possibly. 
I think it depends on how the objects used each other, what kind of mini-frameworks or modules were in place to give functionality, etc. Since models get most of their functionality through inheritance of ActiveRecord::Base it would be difficult to compare w/o knowing how my PROs were hooked up. Composition or inheritance makes a difference in my head right now. Do you have any specific concrete examples in mind?> > > > > When you use > > > interaction-based tests, do you always leave those in place, or do you > > > substitute real objects as you implement them (and if so, do your > > > changes move to a more state-based style)? > > > > Leave the mocked out collaborators in place. An interaction based test > > verifies that the correct interaction takes place. As soon as you > > remove the mock and substitute it with a real object your test has > > become compromised. It''s no longer verifying the correct interaction > > occurs, it now only makes sure your test doesn''t die with a real > > object. > > This leads to perhaps a more subtle case of my previous > question...ActiveRecord relies pretty heavily on the real classes of > objects. To me, this means that it would make more sense to use mocks > if you didn''t use AR, but use the real objects when you are using AR. > Again, this is only between model objects. I agree that controller > specs should mock them all out.I agree with this. This is largely how I work now as described above.> > > > > I realize that''s a ton of questions...I''d be very grateful (and > > > impressed!) if you took the time to answer any of them. Also I''d love > > > to hear input from other people as well. > > > > > > > It''s too bad we can''t just stand at a whiteboard and talk this out. > > The answers to these questions could fill a book and email hardly does > > it justice to provide clear, coherent and complete answers. Not that > > my response are "answers" to your questions, but it''s how I think > > about testing and TDD. 
> > Thanks again for your thoughtful reply. Looking forward to hearing
> > a little bit more.

ditto,

-- 
Zach Dennis
http://www.continuousthinking.com
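To make the state-based vs. interaction-based distinction from Zach's
delegate example concrete, here is a minimal plain-Ruby sketch. It uses
no Rails or RSpec machinery; User, Address, and RecordingAddress are
hypothetical stand-ins, and the hand-rolled recording double is only an
illustration of what a mock framework does for you.

```ruby
# Hypothetical leaf-node collaborator.
class Address
  attr_reader :zip_code
  def initialize(zip_code)
    @zip_code = zip_code
  end
end

# Hypothetical branch object delegating to its address,
# the hand-written form of `delegate :zip_code, :to => :address`.
class User
  def initialize(address)
    @address = address
  end

  def zip_code
    @address.zip_code
  end
end

# State-based check: assert on the result, not on the collaboration.
user = User.new(Address.new("49503"))
raise "state check failed" unless user.zip_code == "49503"

# Interaction-based check: substitute a recording double and assert
# that User sent :zip_code to its collaborator exactly once.
class RecordingAddress
  attr_reader :calls
  def initialize
    @calls = []
  end

  def zip_code
    @calls << :zip_code
    "00000" # canned return value; the result is irrelevant here
  end
end

double = RecordingAddress.new
User.new(double).zip_code
raise "interaction check failed" unless double.calls == [:zip_code]
```

Note how the interaction-based version would keep passing even if
Address later renamed or removed zip_code, which is exactly the
isolation trade-off debated at the top of this thread.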
Hi all -

I've been keeping an eye on this thread and I've just been too busy
with holiday travel and book writing to participate as I would like.
I'm just going to lay out some thoughts in one fell swoop rather than
going back through and finding all the quotes. Hope that works for
you.

First - "mock roles, not objects" - that comes from a paper of the
same name written by Steve Freeman, Nat Pryce, Tim Mackinnon and Joe
Walnes, who I believe were all working for ThoughtWorks, London in
2004. They describe using mocks as part of a process to stay focused
on one object at a time and to let mock objects help you discover the
interfaces of the current object's collaborators. My read is that they
do not make a distinction between domain objects and service objects,
though they do make a distinction between "your" objects (which you
should mock) and "everyone else's" (which you should not). My own
approach is largely derived from this document, and I'd recommend that
everyone participating in this thread give it a read:
http://www.jmock.org/oopsla2004.pdf

I think one place we tend to get stuck, and this is true of TDD in
general, not just mocks, is that mocks need not be a permanent part of
any example. Before I encountered Rails it was common for me to use
mocks in a test and then replace them with the real object later. This
decision would depend on many factors, and I can't say that I sought
to eliminate mocks whenever I could, but there were times when it just
made more sense to use a real object once it came to be.

Rails is a different beast because we don't really have a sense of
three layers with lots of little objects in each. Instead we have what
amount to three giant objects with lots of behavior in each, and even
shared state across layers. For me, this rationalizes isolating things
with mocks and stubs (which is counter to the recommendation in the
OOPSLA paper referenced above).
Because the framework itself provides virtually no isolation, the spec
suite must if you want isolation.

Zach's idea of branch nodes and leaf nodes really speaks to me. I
don't remember where I read this, but I long ago learned that an ideal
OO operation consists of a chain of messages over any number of
objects, culminating at a boundary object (what Zach is calling a leaf
node). It should also favor commands over queries (Tell, Don't Ask),
so while all of the getters we get for free on our AR model objects
are convenient, from an OO perspective they are a giant encapsulation
sieve (again, more reason to isolate things with stubs/mocks in
tests). You might find
http://www.holub.com/publications/notes_and_slides/Everything.You.Know.is.Wrong.pdf
interesting in this regard. In this paper, Holub suggests that getters
are evil and that we should use importers and exporters instead of
exposing getters/setters.

If we were to re-engineer Rails to satisfy this, instead of this in a
controller:

  def index
    @model = Model.find(params[:id])
    render :template => "some_template"
  end

you might see something more like this:

  def index
    Model.export(params[:id]).to(some_template)
  end

Here Model would still do a query, but it becomes internal to the
Model (class) object. Then the controller passes some_template to the
model and says "export yourself", at which point the model starts
calling methods on the recipient like name = self.name. The fact that
the recipient of the export is a view is unknown to the model, so
there is no conceptual binding between model and view. Ah, just think
of how easy THAT would be to mock - and when things are easy to mock,
it means it is easy to swap out components in the chain of events,
thus using run-time conditions to alter the path of a given operation
through different objects.

There is much, much more to say, but this is all I have time to
contribute right now.

Cheers, and Happy New Year to all!

David
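The exporter idea David describes can be sketched in a few lines of
plain Ruby. This is a toy illustration, not a real Rails API: Model,
Exporter, FakeTemplate, and the in-memory RECORDS hash are all
hypothetical names standing in for a model class, the export
intermediary, a view, and a database query.

```ruby
# The model pushes its attributes into whatever recipient it is given,
# so the recipient only needs writer methods (name=, etc.) and never
# exposes getters to the outside.
class Model
  RECORDS = { 1 => { :name => "First" } } # stand-in for a DB table

  # Model.export(id) returns an intermediary so callers can write
  # Model.export(id).to(recipient), as in David's example.
  def self.export(id)
    Exporter.new(new(RECORDS.fetch(id)))
  end

  def initialize(attributes)
    @attributes = attributes
  end

  # "Export yourself": call recipient.name = ..., recipient.foo = ...
  def export_to(recipient)
    @attributes.each { |key, value| recipient.send("#{key}=", value) }
    recipient
  end
end

class Exporter
  def initialize(model)
    @model = model
  end

  def to(recipient)
    @model.export_to(recipient)
  end
end

# Any object with the right writers works - a template or a test double.
class FakeTemplate
  attr_accessor :name
end

template = Model.export(1).to(FakeTemplate.new)
raise "export failed" unless template.name == "First"
```

Because the model only depends on the recipient responding to writer
methods, swapping in a mock for the view (or for the model) requires no
knowledge of the other side, which is the decoupling David is pointing
at.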
I don't know if anyone else will find this thought useful, but:

I think different programmers have different situations, and they
often force different sorts of priorities. I feel like a lot of the
talk about mocking -- particularly as it hedges into discussions of
modeling, design as part of the spec-writing process, LoD, etc. --
implicitly assumes you want to spend a certain percentage of your
work week delineating a sensible class design for your application,
and embedding those design ideas into your specs. At the risk of
sounding like a cowboy coder, I'd like to suggest that some situations
actually call for more tolerance of chaos than others.

I can think of a few forces that might imply this:

- Team size. A bigger team means the code's design has to be more
explicit, because of the limits of the implicit knowledge team members
can get from one another through everyday conversation, etc.

- How quickly the business needs change. Designs for medical imaging
software are likely to change less quickly than those of a
consumer-facing website, which means you might have more or less time
to tease out the forces that would lead you to an optimal design.

In my case: I work in a small team (4 Rails programmers) making
consumer-facing websites, so the team is small and the business needs
can turn on a dime. From having been in such an environment for years,
I feel like I've learned to write code that is just chaotic enough and
yet still works. When I say "just chaotic enough", I mean not
prematurely modeling problems I don't have the time to fully
understand, but still giving the code enough structure and tests that
1) stupid bugs don't happen and 2) I can easily restructure the code
when the time seems right. In such an environment, mocking simply gets
in my way.
If I'm writing, say, a complex sequence of steps involving the posting
of a form, various validations, an email getting sent, a link getting
clicked, and changes being made in the database, I really don't want
to also have to write a series of mocks delineating every underlying
call those various controllers are making. At the time I'm writing the
spec, I simply don't understand the problem well enough to write good
lines about what should be mocked where. In a matter of hours or days
I'll probably end up rewriting all of that stuff, and I'd rather not
have it in my way. We talk about production code having a maintenance
cost: spec code has a maintenance cost as well. If I can get the same
level of logical testing with specs and half the code, by leaving out
mocking definitions, then that's what I'm going to do.

As an analogy: I live in New York, and I've learned to have
semi-compulsive cleaning habits from living in such small places. When
you have a tiny room, you notice clutter much more. Then, a few years
ago, I moved to a much bigger apartment (though "much bigger" is
relative to NYC, of course). At first, I was cleaning just as much,
but then I realized that I simply didn't need to. Now sometimes I just
leave clutter around, on my bedside table or my kitchen counter. I
don't need to spend all my life neatening up. And if I do lose
something, I may not find it instantly, but I can spend a little while
and look for it. It's got to be somewhere in my apartment, and the
whole thing's not even that big.

Francis Hwang
http://fhwang.net/
On Dec 29, 2007 5:46 PM, Francis Hwang <sera at fhwang.net> wrote:
> I don't know if anyone else will find this thought useful, but:
>
> I think different programmers have different situations, and they
> often force different sorts of priorities. I feel like a lot of the
> talk about mocking -- particularly as it hedges into discussions of
> modeling, design as part of the spec-writing process, LoD, etc --
> implicitly assumes you want to spend a certain percentage of your
> work-week delineating a sensible class design for your application,
> and embedding those design ideas into your specs.

The fact is that you are going to spend time on designing, testing and
implementing anyway. It is a natural part of software development. You
cannot develop software without doing these things. The challenge is
to do it in a way that better supports the initial development of a
project as well as maintenance and continued development.

> At the risk of sounding like a cowboy coder I'd like to suggest that
> some situations actually call for more tolerance of chaos than
> others.
>
> I can think of a few forces that might imply this:
>
> - Team size. A bigger team means the code's design has to be more
> explicit, because of the limits of implicit knowledge team members
> can get from one another through everyday conversation, etc.

This argument doesn't pan out. First, it's highly unlikely that the
same developers are on a project for the full lifetime of the project.
Second, it fails to account for the negative impact of bad code and
design. That negative impact includes the time it takes to understand
the bad design, to find and fix obscure bugs, and to extend the code
with new features or changes to existing ones.

> - How quickly the business needs change.
> Designs for medical imaging software are likely to change less
> quickly than those of a consumer-facing website, which means you
> might have more or less time to tease out the forces that would lead
> you to an optimal design.

This doesn't pan out either. Business needs also change at infrequent
intervals. Company mergers, new or updated policies, new or updated
laws, the new CEO wanting something, etc. are things that don't happen
every day, but when they do happen they can have a big impact. The
goal of good program design isn't to add unnecessary complexity to
account for these. The goal of good program design is to develop a
system that is simple, coherent and able to change, in order to
support the initial development of a project as well as maintenance
and continued development.

The ability to "change" is relative -- every program design can be
changed. There are certain practices and disciplines that allow for
easier change, though -- change that reinforces the goal of good
program design. The Law of Demeter is one of them. Simple objects with
a single responsibility is another, which reinforces the separation of
concerns concept. Testing is another.

The concept of an "optimal" design implies there is one magical design
that will solve all potential issues. This puts people in the "design,
then build" mindset -- the idea that if the design is perfect then all
you have to do is build it. We know this is not correct.

> In my case: I work in a small team (4 Rails programmers) making
> consumer-facing websites, so the team is small and the business
> needs can turn on a dime. From having been in such an environment
> for years, I feel like I've learned to write code that is just
> chaotic enough and yet still works.
> When I say "just chaotic enough", I mean not prematurely modeling
> problems I don't have the time to fully understand, but still giving
> the code enough structure and tests that 1) stupid bugs don't happen
> and 2) I can easily restructure the code when the time seems right.

The challenge is to write code that is not chaotic, and to learn to do
it in a way that allows the code to be more meaningful and that
enhances your ability to develop software rather than hindering it.

> In such an environment, mocking simply gets in my way. If I'm
> writing, say, a complex sequence of steps involving the posting of a
> form, various validations, an email getting sent, a link getting
> clicked, and changes being made in the database, I really don't want
> to also have to write a series of mocks delineating every underlying
> call those various controllers are making. At the time I'm writing
> the spec, I simply don't understand the problem well enough to write
> good lines about what should be mocked where. In a matter of hours
> or days I'll probably end up rewriting all of that stuff, and I'd
> rather not have it in my way. We talk about production code having a
> maintenance cost: spec code has a maintenance cost as well. If I can
> get the same level of logical testing with specs and half the code,
> by leaving out mocking definitions, then that's what I'm going to do.

I think we should make a distinction. In my head, when you need to
write code and explore so you can understand what is needed in order
to solve a problem, I call that a "spike".

I don't test spikes. They are an exploratory task which helps me
understand what I need to do. When I understand what I need to do, I
test-drive my development. Now different rules apply for when you use
mocks.
In previous posts in this thread I pointed out that I tend to use a
branch/leaf node object guideline to determine where I use mocks and
where I don't.

> As an analogy: I live in New York, and I've learned to have
> semi-compulsive cleaning habits from living in such small places.
> When you have a tiny room, you notice clutter much more. Then, a few
> years ago, I moved to a much bigger apartment (though "much bigger"
> is relative to NYC, of course). At first, I was cleaning just as
> much, but then I realized that I simply didn't need to. Now
> sometimes I just leave clutter around, on my bedside table or my
> kitchen counter. I don't need to spend all my life neatening up. And
> if I do lose something, I may not find it instantly, but I can spend
> a little while and look for it. It's got to be somewhere in my
> apartment, and the whole thing's not even that big.

Two things about this bother me. One, it implies that from the get-go
it is OK to leave crap around an application code base. Two, it builds
on the concept of an "optimal" design, by way of spending your life
neatening up.

I am going to rewrite your analogy in a way that changes the meaning
as I read it, but hopefully conveys what you wanted to get across:

"I do not want to spend the life of a project refactoring a code base
to perfection for the sake of ideological views on what code should
be. I want to develop a running program for my customer. And where I
find the ideals clashing with that goal, I will abandon the ideals.
Knowing this, parts of my application may be cluttered or imperfect,
but I am OK with this and so is my customer -- he/she has a running
application."

If this is what you meant, then I agree with you. The question is: are
there things you can learn or discover which better support the goal
of developing software for your customer, for the initial launch as
well as maintenance and ongoing development? If so, which ones can be
learned, and how do they apply?
And for the things you discover, be sure to share them with the rest
of us. =)

Finally, IMO mocking and interaction-based testing have a place in
software development, and when used properly they add value to the
software development process.

-- 
Zach Dennis
http://www.continuousthinking.com
On 12/29/2007 5:46 PM, Francis Hwang wrote:
> - How quickly the business needs change. Designs for medical imaging
> software are likely to change less quickly than those of a
> consumer-facing website, which means you might have more or less
> time to tease out the forces that would lead you to an optimal
> design.

A few weeks ago, I ran across the following comment, explaining away
200 lines of copied-and-pasted internal structures in lieu of
encapsulation, in what was once the world's largest consumer-facing
web site:

  /* Yes, normally this would be              */
  /* incredibly dangerous - but MainLoop is   */
  /* very unlikely to change now (spring '00) */

Careful about those assumptions.

Jay Levitt
On Dec 30, 2007, at 1:42 AM, Zach Dennis wrote:
> On Dec 29, 2007 5:46 PM, Francis Hwang <sera at fhwang.net> wrote:
>> I don't know if anyone else will find this thought useful, but:
>>
>> I think different programmers have different situations, and they
>> often force different sorts of priorities. I feel like a lot of the
>> talk about mocking -- particularly as it hedges into discussions of
>> modeling, design as part of the spec-writing process, LoD, etc --
>> implicitly assumes you want to spend a certain percentage of your
>> work-week delineating a sensible class design for your application,
>> and embedding those design ideas into your specs.
>
> The fact is that you are going to spend time on designing, testing
> and implementing anyways. It is a natural part of software
> development. You cannot develop software without doing these things.
> The challenge is to do it in a way that better supports the initial
> development of a project as well as maintenance and continued
> development.

I certainly didn't mean to imply that you shouldn't do any design or
testing. If I had to guess at my coding style versus the average RSpec
user, based on what's been said in this thread, I'd guess that I do
about as much writing of tests/specs, and probably spend less time
designing. But there is certainly such a thing as overdesigning as
well, right? I'm always trying to find the right amount, and I suspect
that "the right amount" can vary somewhat in context.

>> At the risk of sounding like a cowboy coder I'd like to suggest
>> that some situations actually call for more tolerance of chaos than
>> others.
>>
>> I can think of a few forces that might imply this:
>>
>> - Team size. A bigger team means the code's design has to be more
>> explicit, because of the limits of implicit knowledge team members
>> can get from one another through everyday conversation, etc.
>
> This argument doesn't pan out.
> First, it's highly unlikely that the same developers are on a
> project for the full lifetime of the project. Second, this fails to
> account for the negative impact of bad code and design. The negative
> impact includes the time it takes to understand the bad design,
> find/fix obscure bugs and to extend with new features or changing to
> existing ones.

Again, I did not say "if you have a small team you don't have to do
any design at all." I said that perhaps if you have a much smaller
team you can spend a little less time on design, because implicit
knowledge is much more effectively communicated. Are you disagreeing
with this point? Are you saying that two software projects, one with
four developers and the other with forty, will ideally spend the exact
same percentage of time thinking about modeling, designing, etc.?

>> - How quickly the business needs change. Designs for medical
>> imaging software are likely to change less quickly than those of a
>> consumer-facing website, which means you might have more or less
>> time to tease out the forces that would lead you to an optimal
>> design.
>
> This doesn't pan out either. Business needs also change at
> infrequent intervals. Company mergers, new or updated policies, new
> or updated laws, the new CEO wanting something, etc are things that
> don't happen every day, but when they do happen it can have a big
> impact. The goal of good program design isn't to add unnecessary
> complexity which accounts for these.

I wasn't saying that some businesses' needs never change. The point I
was trying to make is that in some sorts of businesses and companies,
change happens more often, and can be expected to happen more often
based on past experience.

>> In my case: I work in a small team (4 Rails programmers) making
>> consumer-facing websites, so the team is small and the business
>> needs can turn on a dime.
>> From having been in such an environment for years, I feel like I've
>> learned to write code that is just chaotic enough and yet still
>> works. When I say "just chaotic enough", I mean not prematurely
>> modeling problems I don't have the time to fully understand, but
>> still giving the code enough structure and tests that 1) stupid
>> bugs don't happen and 2) I can easily restructure the code when the
>> time seems right.
>
> The challenge is to write code that is not chaotic, and to learn to
> do it in a way that allows the code to be more meaningful and that
> enhances your ability to develop software rather then hinder it.

I wonder if part of the disconnect here depends on terminology. Some
might see "chaos" as a negative term; I don't. There are plenty of
highly chaotic, functional systems, both man-made and natural.
Ecosystems, for example, are chaotic: they have an order that is
implicit in the collective actions of all their agents. But that order
is difficult to understand, since it's not really written down. I
guess that's what I'm trying to express when applying the word "chaos"
to code: it functions for now, but perhaps the way it works isn't as
expressive as it could be for a newcomer coming to the code.

Another thing I'd express is that I find a codebase to be asymmetrical
in terms of how much specification each individual piece needs. I find
it surprising, for example, when people want to test their Rails views
in isolation. I write plenty of tests when I'm working, but I try to
have a sense of which pieces of code require a fuller treatment. I'll
extensively test code when the cost/benefit ratio makes sense to me,
trying to think about factors such as:

- how hard is it to write the test?
- how hard is the code, and how many varied edge cases are there that
I should write down?
- are there unusual cases that I can think of now that should be
embodied in a test?

>> In such an environment, mocking simply gets in my way.
>> If I'm writing, say, a complex sequence of steps involving the
>> posting of a form, various validations, an email getting sent, a
>> link getting clicked, and changes being made in the database, I
>> really don't want to also have to write a series of mocks
>> delineating every underlying call those various controllers are
>> making. At the time I'm writing the spec, I simply don't understand
>> the problem well enough to write good lines about what should be
>> mocked where. In a matter of hours or days I'll probably end up
>> rewriting all of that stuff, and I'd rather not have it in my way.
>> We talk about production code having a maintenance cost: spec code
>> has a maintenance cost as well. If I can get the same level of
>> logical testing with specs and half the code, by leaving out
>> mocking definitions, then that's what I'm going to do.
>
> I think we should make a distinction. In my head when you need to
> write code and explore so you can understand what is needed in order
> to solve a problem I call that a "spike".
>
> I don't test spikes. They are an exploratory task which help me
> understand what I need to do. When I understand what I need to do I
> test drive my development. Now different rules apply for when you
> use mocks. In previous posts in this thread I pointed out that I
> tend to use a branch/leaf node object guideline to determine where I
> use mocks and when I don't.

My understanding of a spike is that you write code to explore a
problem you aren't certain is solvable at all, given a certain set of
constraints. That's not the lack of understanding I'm talking about:
I'm more addressing code that I know is easily writable, but where
there are a number of issues regarding application design that I
haven't worked out yet.
I'd rather write a test that encapsulates only the external
touchpoints -- submit a form, receive an email, click on the link in
the email -- and leave any deeper design decisions to a few minutes
later, when I actually begin implementing that interaction.

There's another kind of "not understanding" that's also relevant here:
a "not understanding" due to the fact that you don't have all the
relevant information, and you can't get it all now. For example: you
release the very first iteration of a website feature on Monday,
knowing full well that the feature's not completed. But the reason you
release it is that on Wednesday you want to collect user data
regarding this feature, which will help you and the company make
business decisions about where the feature should go next.

>> As an analogy: I live in New York, and I've learned to have
>> semi-compulsive cleaning habits from living in such small places.
>> When you have a tiny room, you notice clutter much more. Then, a
>> few years ago, I moved to a much bigger apartment (though "much
>> bigger" is relative to NYC, of course). At first, I was cleaning
>> just as much, but then I realized that I simply didn't need to. Now
>> sometimes I just leave clutter around, on my bedside table or my
>> kitchen counter. I don't need to spend all my life neatening up.
>> And if I do lose something, I may not find it instantly, but I can
>> spend a little while and look for it. It's got to be somewhere in
>> my apartment, and the whole thing's not even that big.
>
> Two things about this bother me. One, this implies that from the
> get-go it is ok to leave crap around an application code base.

Well, not to belabor the analogy, but: it's not "crap". If it's in my
apartment, I own it for a reason. I may not use it all the time, it
may not be the most important thing in my life, but apparently I need
it once in a while or else I'd throw it away.
I may not spend all my time trying to find the optimal place to put
it, but that doesn't mean I don't value it. I just might value it less
than other things in my apartment.

> Two, this builds on the concept of an "optimal" design; by way of
> spending your life neatening up.
>
> I am going to rewrite your analogy in a way that changes the meaning
> as I read it, but hopefully conveys what you wanted to get across:
> "
> I do not want to spend the life of a project refactoring a code base
> to perfection for the sake of ideological views on what code should
> be. I want to develop a running program for my customer. And where I
> find the ideals clashing with that goal I will abandon the ideals.
> Knowing this, parts of my application may be cluttered or imperfect,
> but I am ok with this and so is my customer -- he/she has a running
> application.
> "

That's probably close to what I'm trying to say. But in a broader,
philosophical sense, I'm okay with the fact that my code is never
going to be perfect. Not at this job, not at any other job. In fact I
don't know if I've ever met anybody who gets to write perfect code. We
write code in the real world, and the real world's far from perfect. I
suppose wabi-sabi comes into play here.

To bring it back to mocks: it seems to me that mocks might play a role
in your specs if you were highly focused on the design and interaction
of classes in isolation from all other classes, but understanding that
isolation involves having done a decent amount of design work --
though more in some cases than in others. But if you were living with
code that was more chaotic/amorphous/what-have-you, prematurely
embedding such design assumptions into your specs might do more harm
than good.

I do, incidentally, use mocks extensively in a lot of code, but only
in highly focused cases where simulating the state of an external
resource (filesystem, external login service) seems extremely
important.
Of course, that usage of mocks is very different from what's
recommended as the default with RSpec.

Francis Hwang
http://fhwang.net/
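The narrower use of mocks Francis describes - simulating the state of
an external resource such as a filesystem - can be sketched with
dependency injection in plain Ruby. ReportLoader and FakeFilesystem
are hypothetical names invented for this illustration; the point is
only that the fake touches no real disk.

```ruby
# An object that depends on "a filesystem", received as a collaborator
# rather than reached via File/Dir directly, so tests can substitute
# an in-memory fake.
class ReportLoader
  def initialize(filesystem)
    @filesystem = filesystem
  end

  def load(path)
    raise "missing report: #{path}" unless @filesystem.exist?(path)
    @filesystem.read(path).strip
  end
end

# In-memory stand-in simulating filesystem state for a test.
class FakeFilesystem
  def initialize(files)
    @files = files # hash of path => contents
  end

  def exist?(path)
    @files.key?(path)
  end

  def read(path)
    @files.fetch(path)
  end
end

fs = FakeFilesystem.new("/reports/q4.txt" => "revenue: 12\n")
loader = ReportLoader.new(fs)
raise "unexpected" unless loader.load("/reports/q4.txt") == "revenue: 12"
```

In production you would pass an adapter over File; in the spec you
pass the fake, exercising the error and success paths without any
filesystem setup or teardown.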
On Dec 30, 2007, at 1:52 PM, Jay Levitt wrote:
> On 12/29/2007 5:46 PM, Francis Hwang wrote:
>
>> - How quickly the business needs change. Designs for medical
>> imaging software are likely to change less quickly than those of a
>> consumer-facing website, which means you might have more or less
>> time to tease out the forces that would lead you to an optimal
>> design.
>
> A few weeks ago, I ran across the following comment, explaining away
> 200 lines of copied-and-pasted internal structures in lieu of
> encapsulation, in what was once the world's largest consumer-facing
> web site:
>
> /* Yes, normally this would be              */
> /* incredibly dangerous - but MainLoop is   */
> /* very unlikely to change now (spring '00) */
>
> Careful about those assumptions.

Yeah, well, there's a difference between chaotic code and foolish
code. Regardless of what company I was working at, pretty much any
code that would probably break a few years out would be a problem.

Of course, you can't predict with 100% accuracy which parts of the
code are likely to change and which are likely to go untouched for
years. I think you can make educated guesses, though.

Incidentally, how well-tested was that code base? 200 lines of
copy-and-paste smells like untested code to me.

Francis Hwang
http://fhwang.net/
On 12/30/2007 3:29 PM, Francis Hwang wrote:
> On Dec 30, 2007, at 1:52 PM, Jay Levitt wrote:
>
>> A few weeks ago, I ran across the following comment, explaining
>> away 200 lines of copied-and-pasted internal structures in lieu of
>> encapsulation, in what was once the world's largest consumer-facing
>> web site:
>>
>> /* Yes, normally this would be              */
>> /* incredibly dangerous - but MainLoop is   */
>> /* very unlikely to change now (spring '00) */
>>
>> Careful about those assumptions.
>
> Yeah, well, there's a difference between chaotic code and foolish
> code. Regardless of what company I was working at, pretty much any
> code that would probably break a few years out would be a problem.
>
> Of course, you can't predict with 100% accuracy which parts of the
> code are likely to change and which are likely to go untouched for
> years. I think you can make educated guesses, though.

You can, and we did... turns out they weren't. (OK, I'm exaggerating.
Most of them were, and the only guesses I'm finding now are the wrong
guesses, by definition. And the world's changed a lot, and we know
more and can do more.)

> Incidentally, how well-tested was that code base? 200 lines of
> copy-and-paste smells like untested code to me.

15-20 years ago, unit tests were not a widespread industry practice :)
This code's in a procedural language that really, really doesn't do
unit tests well. I've been trying, too. Almost wrote a pre-processor,
till I thought about the maintenance nightmare that'd cause.

Jay Levitt
On 12/30/2007 1:42 AM, Zach Dennis wrote:
> I think we should make a distinction. In my head when you need to
> write code and explore so you can understand what is needed in order
> to solve a problem I call that a "spike".

That's great; I've been needing a term for exactly that and never saw
this word used.

On Dec 29, 2007 5:46 PM, Francis Hwang <sera at fhwang.net> wrote:
>> At first, I was cleaning just as much, but then I realized that I
>> simply didn't need to. Now sometimes I just leave clutter around,
>> on my bedside table or my kitchen counter.

I'm having the opposite problem. I moved from a huge house (that I
specifically designed to always look clean) to a good-sized apartment,
and discovered I'm a disgusting, unsanitary slob.

(I have no idea how this relates to RSpec. I just wanted to share.)

Jay Levitt
On Dec 30, 2007, at 9:38 PM, Jay Levitt wrote:
>>> Incidentally, how well-tested was that code base? 200 lines of
>>> copy-and-paste smells like untested code to me.
>
> 15-20 years ago, unit tests were not a widespread industry
> practice :) This code's in a procedural language that really, really
> doesn't do unit tests well. I've been trying, too. Almost wrote a
> pre-processor, till I thought about the maintenance nightmare that'd
> cause.

Right, that's why I asked. I think working with languages, tools, and
frameworks that are easier to test is a great advantage over how we
all worked 10 or more years ago... I suspect part of that luxury
translates into being able to actually design _less_, since the cost
of fixing our design mistakes in the future goes down significantly.

Francis Hwang
http://fhwang.net/
On Dec 29, 2007, at 5:46 PM, Francis Hwang wrote:
> I don't know if anyone else will find this thought useful, but:
>
> I think different programmers have different situations, and they
> often force different sorts of priorities. I feel like a lot of the
> talk about mocking -- particularly as it hedges into discussions of
> modeling, design as part of the spec-writing process, LoD, etc --
> implicitly assumes you want to spend a certain percentage of your
> work-week delineating a sensible class design for your application,
> and embedding those design ideas into your specs. At the risk of
> sounding like a cowboy coder, I'd like to suggest that some situations
> actually call for more tolerance of chaos than others.
>
> I can think of a few forces that might imply this:
>
> - Team size. A bigger team means the code's design has to be more
> explicit, because of the limits of the implicit knowledge team members
> can get from one another through everyday conversation, etc.
> - How quickly the business needs change. Designs for medical imaging
> software are likely to change less quickly than those of a consumer-
> facing website, which means you might have more or less time to tease
> out the forces that would lead you to an optimal design.

+1 -- this helps my thinking out a lot. Thanks for the contributions, as
always (this has been a great thread -- from everyone involved).

Scott
On Dec 30, 2007 10:09 PM, Francis Hwang <sera at fhwang.net> wrote:
> On Dec 30, 2007, at 9:38 PM, Jay Levitt wrote:
>
>>>> Incidentally, how well-tested was that code base? 200 lines of copy-
>>>> and-paste smells like untested code to me.
>>
>> 15-20 years ago, unit tests were not a widespread industry practice :)
>> This code's in a procedural language that really, really doesn't do
>> unit tests well. I've been trying, too. Almost wrote a pre-processor,
>> till I thought about the maintenance nightmare that'd cause.
>
> Right, that's why I ask. I think working with languages, tools, and
> frameworks that are easier to test is a great advantage over how we
> all worked 10 or more years ago ... I suspect part of that luxury
> translates into being able to actually design _less_, since the cost
> of fixing our design mistakes in the future goes down significantly.

I don't think of it as designing less. (B/T)DD means designing
incrementally. I read recently something where someone made a
distinction between invention and discovery. Rather than sitting down
'ahead of time' and inventing a design, you can discover the design as
you go. The tests/specs become the design documentation themselves, and
can evolve as requirements change or are refined as the process
continues.

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/
On Dec 31, 2007 12:53 PM, Rick DeNatale <rick.denatale at gmail.com> wrote:
> On Dec 30, 2007 10:09 PM, Francis Hwang <sera at fhwang.net> wrote:
>> On Dec 30, 2007, at 9:38 PM, Jay Levitt wrote:
>>
>>>>> Incidentally, how well-tested was that code base? 200 lines of
>>>>> copy-and-paste smells like untested code to me.
>>>
>>> 15-20 years ago, unit tests were not a widespread industry practice :)
>>> This code's in a procedural language that really, really doesn't do
>>> unit tests well. I've been trying, too. Almost wrote a pre-processor,
>>> till I thought about the maintenance nightmare that'd cause.
>>
>> Right, that's why I ask. I think working with languages, tools, and
>> frameworks that are easier to test is a great advantage over how we
>> all worked 10 or more years ago ... I suspect part of that luxury
>> translates into being able to actually design _less_, since the cost
>> of fixing our design mistakes in the future goes down significantly.
>
> I don't think of it as designing less. (B/T)DD means designing
> incrementally. I read recently something where someone made a
> distinction between invention and discovery. Rather than sitting down
> 'ahead of time' and inventing a design, you can discover the design as
> you go.

I don't think it is "designing less" either. It's designing better and
doing it smarter, knowing that you'll never fully comprehend the domain
of your problem upfront, so you discover it, iteratively. As you
discover more about the domain, the design of your program changes
(during refactoring) to support the domain model it represents. This is
a concept from Domain Driven Design.

If Francis is referring to doing less upfront design to try to master it
all from the outset, then I agree that less of that is better. But that
is entirely different than just doing less design.
--
Zach Dennis
http://www.continuousthinking.com
On Dec 31, 2007, at 1:10 PM, Zach Dennis wrote:
> I don't think it is "designing less" either. It's designing better
> and doing it smarter, knowing that you'll never fully comprehend
> the domain of your problem upfront, so you discover it, iteratively.
> As you discover more about the domain, the design of your program
> changes (during refactoring) to support the domain model it
> represents.
>
> This is a concept from Domain Driven Design.
>
> If Francis is referring to doing less upfront design to try to master
> it all from the outset, then I agree that less of that is better. But
> that is entirely different than just doing less design.

I'm not certain how much we're genuinely disagreeing here and how much
we're talking past each other -- I'm certainly feeling like I'm not
communicating my amorphous ideas very well.

One thing that seems fuzzy to me is the implied time frames here. Let me
ask you this, Zach: Is it your aim that your released code always
contains a set of classes whose interactions with other classes are
well-structured and defined, through mocks and other tools? And is it
your belief that you can always seek to release code which embodies a
precise understanding of the domain in question?

I suppose part of what I'm saying is that sometimes, for non-programmer
reasons, the domain itself is too fuzzy or too quickly shifting to try
to nail down with a well-structured design. Sometimes you just release
code that amorphously hints at a future design -- and in those cases, a
strong test suite is what prevents you from shooting yourself in the
foot.

Francis Hwang
http://fhwang.net/
On Jan 1, 2008 1:48 PM, Francis Hwang <sera at fhwang.net> wrote:
> On Dec 31, 2007, at 1:10 PM, Zach Dennis wrote:
>> I don't think it is "designing less" either. It's designing better
>> and doing it smarter, knowing that you'll never fully comprehend the
>> domain of your problem upfront, so you discover it, iteratively. As
>> you discover more about the domain, the design of your program
>> changes (during refactoring) to support the domain model it
>> represents.
>>
>> This is a concept from Domain Driven Design.
>>
>> If Francis is referring to doing less upfront design to try to
>> master it all from the outset, then I agree that less of that is
>> better. But that is entirely different than just doing less design.
>
> I'm not certain how much we're genuinely disagreeing here and how much
> we're talking past each other -- I'm certainly feeling like I'm not
> communicating my amorphous ideas very well.

I think I understand what you are trying to get across. That is why I've
communicated back what I think you have said (or at least what I think
you mean) in each of my responses, hopefully clarifying my position (in
both agreement and disagreement) to something more specific, since
phrases like "designing less" can be taken in several ways and carry
several different meanings. I'd much rather discuss a particular aspect
of "designing less" where it is advantageous or not.

> One thing that seems fuzzy to me is the implied time frames here. Let
> me ask you this, Zach: Is it your aim that your released code always
> contains a set of classes whose interactions with other classes are
> well-structured and defined, through mocks and other tools?

It is my aim that objects, their responsibilities, and their
interactions are purposefully structured and defined. For me, this
happens through the iterative process of adding features to the system
in a TDD/BDD manner.
Since I test-drive feature implementation, I have explicitly made a
decision to create a new object, add a new responsibility to an existing
object, or move responsibilities from one object to another. Mocks are
simply a tool that I use to help me discover interfaces and objects.
They also help me express the coordination and interaction between
objects which fulfill an application requirement, and they provide the
added benefit of testing objects in isolation, without the unnecessary
complexity of testing objects throughout the test suite (which I believe
strict state-based testing of everything does).

I do not use any tool primarily for the sake of designing a
well-structured class taxonomy or detailed system design. No
all-encompassing UML diagrams or oodles of documentation. I strive for
the domain-driven design principle of having the domain model in the
code accurately reflect the domain you are solving a problem for.

> And is it your belief that you can always seek to release code which
> embodies a precise understanding of the domain in question?

I believe you can always seek to release code which is an accurate
reflection of the domain you are solving a problem for. I do not believe
that the code itself will be a precise understanding of the domain in
question. The features you implement will be solving a specific problem
in a given domain. What is required to implement those features is only
part of the domain. Ideally, what is implemented is only "precise"
enough to satisfy the feature requirement. A lot of parts of the domain
will be missing. Initially, the understanding of the domain may be fuzzy
at best. But you start with a single feature to implement and you begin
modeling the domain within your codebase for only what is required to
implement that feature.
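[Editor's sketch] The isolation Zach describes can be pictured without any framework at all. The hand-rolled stand-in below (all class and method names are hypothetical, invented for illustration, not from the thread) exercises an object against nothing but the two-method interface it actually calls:

```ruby
# A hypothetical presenter that depends only on the interface of its
# collaborator (#name and #synopsis), never on a concrete model class.
class ProjectPresenter
  def initialize(project)
    @project = project
  end

  # Builds a one-line summary from the two methods the presenter needs.
  def summary
    "#{@project.name}: #{@project.synopsis}"
  end
end

# A minimal hand-rolled stand-in for a real Project model. Struct gives
# us exactly the interface the presenter calls and nothing more, so the
# check below runs in isolation from the database and the model layer.
FakeProject = Struct.new(:name, :synopsis)

fake = FakeProject.new("My first project", "A fantastic new project")
puts ProjectPresenter.new(fake).summary
# => My first project: A fantastic new project
```

Note that the trade-off Andy raised at the top of the thread still applies: if the real Project later loses #synopsis, this isolated check keeps passing, which is why the discussion keeps coming back to pairing isolated specs with some integration coverage.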
You may only end up with a couple of objects; hardly a "precise" or
perfect understanding of the domain at large, but as more features are
added and time passes, you get a better grasp of the domain. As you
implement more features, you will most likely discover new objects and
responsibilities. This may entail extracting behavior from existing
objects onto a new object, because it better reflects what is required
in the domain, or because it promotes single responsibility, separation
of concerns, and the concept of simple objects.

The whole process, though, is something that is learned, continually and
incrementally. I don't stop development of a feature because I am not an
expert in the domain I am working in. I have to understand what is
required to implement a particular feature or solve a particular problem
anyway -- so I try to understand it the way the business expert or
customer understands it, and I try to keep that understanding consistent
within the codebase, specifically in the domain objects of the
application.

I agree with a lot of the concepts behind domain-driven design and how
it works with agile (specifically XP, in my case) development. It has
created simpler code, code that is easier to understand and maintain,
and better test suites for the apps I've worked on.

> I suppose part of what I'm saying is that sometimes, for
> non-programmer reasons, the domain itself is too fuzzy or too quickly
> shifting to try to nail down with a well-structured design. Sometimes
> you just release code that amorphously hints at a future design --
> and in those cases, a strong test suite is what prevents you from
> shooting yourself in the foot.

I agree that the domains in which we are implementing features and
solving problems are largely fuzzy. This is why it is so important to
have business experts and real users be a part of the development
process.
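[Editor's sketch] The "extracting behavior from existing objects onto a new object" that Zach mentions is essentially an extract-class refactoring. A toy version (all names and the tax rate are hypothetical, chosen only for illustration) might look like this, with tax logic pulled out of an order object so each class reflects one domain concept:

```ruby
# Extracted object: tax calculation now lives behind its own named
# domain concept instead of being buried inside Order.
class TaxCalculator
  RATE = 0.06 # illustrative flat rate, not a real business rule

  def tax_for(amount)
    (amount * RATE).round(2)
  end
end

# Order keeps a single responsibility (totaling) and collaborates with
# the calculator; injecting the collaborator also makes Order easy to
# test in isolation with a stand-in calculator.
class Order
  def initialize(subtotal, tax_calculator = TaxCalculator.new)
    @subtotal = subtotal
    @tax_calculator = tax_calculator
  end

  def total
    @subtotal + @tax_calculator.tax_for(@subtotal)
  end
end

puts Order.new(100.0).total
# => 106.0
```

Before the extraction, Order would have computed the tax inline; afterwards, a shift in the domain (say, per-region tax rules) touches only TaxCalculator.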
Sometimes it's not always possible to have direct access when you need
it, or your customer is venturing into new territory and trying to
figure things out as they go as well. This is why it's so important to
have a clean codebase which reflects the domain, because when the domain
shifts, we know what has to shift in our code.

For me, well-structured is a relative term. It shifts with each feature
I add, remove, or change in the system. After I complete a feature, I
want the codebase to be a coherent reflection of the features it
includes. Over time the system is going to change and evolve, and parts
and pieces are going to be added or thrown away. My implementation may
not always be the best possible, but it will be the best I could do at a
given time. And it will always strive for simplicity over complexity,
and it will have a strong test suite.

This is probably a longer reply than you were looking for... hopefully
that's alright. ;)

--
Zach Dennis
http://www.continuousthinking.com