[Opensim-dev] Automated Testing

Justin Clark-Casey jjustincc at googlemail.com
Thu Dec 20 12:30:38 UTC 2007


Stefan Andersson wrote:
> Gerikes,
>  
> actually, several of the core devs are great fans of unit testing with 
> nunit, myself included - our lack of tests is partly historical, and 
> partly because it risks becoming a threshold for non-test-savvy developers.
I also like unit testing where appropriate.  Personally, I would say 
that it's worth taking the time to write unit tests where stability is 
important (core grid services, inventory, assets, etc.), but that they 
can hold things up in areas where the code is still immature or 
undergoing rapid change.

As well as some of the points below (most of which I very much agree 
with), I would like to say that if we are to introduce unit tests, we 
should also seriously consider introducing continuous integration (as 
Gerikes is probably talking about below), where a machine rebuilds on 
every code change, runs the tests, and then publishes the results to the 
website.  If any test fails we'll know at the point of breakage.  If we 
have a culture of fixing broken tests immediately, this should enhance 
stability.
>  
> This said, anyone is free to contribute good tests; but for the sake 
> of discussion, I'd like to discuss a set of ground rules:
>  
> #1) Tests should be unit tests, incorporate only a few classes and 
> they should be narrow in scope (only test a few results)
Definitely.
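As a concrete sketch of what rule #1 might look like with NUnit: a test that exercises a single class and checks only a couple of results.  The `Heightmap` class below is hypothetical, defined inline so the example is self-contained; it is not an actual OpenSim type.

```csharp
using NUnit.Framework;

// Hypothetical stand-in for a small, self-contained class under test.
public class Heightmap
{
    private readonly int size;
    private readonly float[] heights;

    public Heightmap(int size)
    {
        this.size = size;
        heights = new float[size * size];
    }

    public float this[int x, int y]
    {
        get { return heights[y * size + x]; }
        set { heights[y * size + x] = value; }
    }
}

[TestFixture]
public class HeightmapTests
{
    // Narrow in scope: one class, one behaviour, one assertion.
    [Test]
    public void SetThenGetReturnsSameValue()
    {
        Heightmap map = new Heightmap(256);
        map[10, 20] = 21.5f;
        Assert.AreEqual(21.5f, map[10, 20]);
    }
}
```

Because the test touches only one class, moving or renaming code around it doesn't break it, which is exactly the "coupler" concern in motivation #1.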
> #2) Tests should primarily test outcomes, not internal behaviour (ie, 
> what is returned, not what is called or state changes)
+1
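A minimal illustration of rule #2 - assert on what the method returns, not on which internal members it touched.  `ChatFilter` is again a hypothetical inline class, not OpenSim code.

```csharp
using NUnit.Framework;

// Hypothetical class whose observable outcome we want to test.
public class ChatFilter
{
    private readonly double audibleRange;

    public ChatFilter(double audibleRange)
    {
        this.audibleRange = audibleRange;
    }

    // The outcome under test: is a chat message audible at this distance?
    public bool IsAudible(double distance)
    {
        return distance <= audibleRange;
    }
}

[TestFixture]
public class ChatFilterTests
{
    [Test]
    public void MessagesAreAudibleOnlyWithinRange()
    {
        ChatFilter filter = new ChatFilter(20.0);

        // Assertions on return values, not on private state or call counts.
        Assert.IsTrue(filter.IsAudible(10.0));
        Assert.IsFalse(filter.IsAudible(30.0));
    }
}
```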
> #3) Anyone should be allowed to throw away a test that they don't 
> understand well enough to change.
To me, this sounds like an excuse to ignore testing code.  If a test 
works and passes (and doesn't test too much) then it should be 
understandable.  Perhaps you're right, but I think the bar for throwing 
away a test should be set high.
> #4) Tests should be a pragmatic part of your daily coding, not a 
> one-off policy thing.
>
+1
> The motivations for these points are;
>  
> #1) The more classes involved, the more the test will hinder 
> effortless refactoring, as when moving functions from one place 
> to another. The tests themselves become 'couplers'
True.  Java IDEs ameliorate this through their extensive refactoring 
support, but it becomes more difficult without such tooling - perhaps 
the non-free editions of Visual Studio have good refactoring support, 
but the free one doesn't appear to, and there's not all that much in 
SharpDevelop or MonoDevelop.  Hopefully with loose coupling this won't 
be quite such a problem.
> #2) Decoupled code should ideally keep minimal state; and setting up 
> tests for testing behaviour and internal state tend to make them 
> 'broad' and 'complex' (violates #1 and #3)
+1
> #3) Test should make you faster, not slower. If changing the code 
> becomes difficult because of a test, the test should go, not the 
> change. Either chuck the test, or narrow its scope.
I think that tests, with continuous integration, would also enhance 
stability.  I've already seen a number of instances where working 
OpenSim functionality has been broken by subsequent changes.
> #4) Code up a testing workbench instead of step-debugging. I promise 
> you you will gain. Debug by introducing tests based on your 
> assumptions. If you see something that you think you know the outcome 
> of, make a test for it to be sure.
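Stefan's rule #4 might look something like this in NUnit terms - capture the assumption as assertions rather than stepping through in a debugger.  `RegionClamp` is a hypothetical stand-in (assuming the usual 256m region size); it is not actual OpenSim code.

```csharp
using NUnit.Framework;

// Hypothetical helper embodying an assumption we want to pin down:
// out-of-range coordinates never escape the region.
public static class RegionClamp
{
    public const int RegionSize = 256;

    // Clamp a coordinate into the region's [0, RegionSize) range.
    public static int Clamp(int coord)
    {
        if (coord < 0)
            return 0;
        if (coord >= RegionSize)
            return RegionSize - 1;
        return coord;
    }
}

[TestFixture]
public class RegionClampTests
{
    // Instead of step-debugging to verify these outcomes once, the
    // assumptions are recorded and re-checked on every run.
    [Test]
    public void NegativeCoordinateClampsToZero()
    {
        Assert.AreEqual(0, RegionClamp.Clamp(-5));
    }

    [Test]
    public void OversizeCoordinateClampsToEdge()
    {
        Assert.AreEqual(255, RegionClamp.Clamp(300));
    }
}
```

The payoff over step-debugging is that the check survives the debugging session: if a later change breaks the assumption, the test fails at the point of breakage.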
>  
> This is my truth, now give me yours.
> /Stefan
--
justincc
>  
> ------------------------------------------------------------------------
>
> > Date: Sat, 15 Dec 2007 13:16:36 -0500
> > From: sean at dague.net
> > To: opensim-dev at lists.berlios.de
> > Subject: Re: [Opensim-dev] Automated Testing
> >
> > On Sat, Dec 15, 2007 at 12:53:36PM -0500, Gerikes wrote:
> > > Hi all.
> > >
> > > Just starting to look at this project, and it seems interesting
> > > enough. I come from a short background creating internal corporate
> > > web sites using C# / Monorail, but have interests in virtual worlds,
> > > and am happy to see such a project being built on .NET.
> > >
> > > Anyway, I've been using automated unit and integration testing for
> > > almost a year now, and found it drastically increased the quality of
> > > my code. What is the position of the project on how unit testing is
> > > implemented? Does anyone on the project write their own but just not
> > > commit them?
> > >
> > > It would be hasty of me to come in here trying to change the world
> > > (so to speak), so perhaps I'll start smaller. I'm planning on trying
> > > to grok the code base by developing my own test harness that can be
> > > used for integration testing. Basically, it would be a region that
> > > could be started and have tests run against it, probably in the form
> > > of sending commands through bots connected using libsecondlife. The
> > > API would allow the tests to run multiple times, perhaps switching
> > > out things like which data store manager is used during the test, so
> > > that a single test can be run against multiple implementations.
> > >
> > > I would try to allow the creation of tests to be simple and
> > > straightforward, such as...
> > >
> > >
> > > [Test]
> > > public void CanInstantMessage()
> > > {
> > >     const string message = "Test Message";
> > >
> > >     TestAvatar avatar1 = ActivateNewAvatar();
> > >     TestAvatar avatar2 = ActivateNewAvatar();
> > >     avatar1.Perform(Action.InstantMessage(avatar2, message));
> > >
> > >     Assert.AreEqual(1, avatar2.ReceivedMessages.From(avatar1).Count);
> > >     Assert.AreEqual(message,
> > >         avatar2.ReceivedMessages.From(avatar1).First().Message);
> > > }
> > >
> > > So that's it, I'm interested in any feedback you have.
> >
> > ++++ on getting some automated testing into the tree. The lack of that
> > right now is mostly based on rapid turnover in the current code, and
> > lack of time, not lack of interest.
> >
> > Patches that start to integrate automated testing into the tree are
> > welcomed as long as they can be executed on both MS.NET and Mono using
> > nant (we're very strongly bi-platform).
> >
> > My suggestion is to start small on one area of the code and get that
> > well tested, then extend throughout. I think you'll get a lot of
> > support from everyone on the dev team if you jump in and start
> > contributing here.
> >
> > More work on test engineering is one of my New Year's resolutions.
> >
> > -Sean
> >
> > --
> > __________________________________________________________________
> >
> > Sean Dague Mid-Hudson Valley
> > sean at dague dot net Linux Users Group
> > http://dague.net http://mhvlug.org
> >
> > There is no silver bullet. Plus, werewolves make better neighbors
> > than zombies, and they tend to keep the vampire population down.
> > __________________________________________________________________
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Opensim-dev mailing list
> Opensim-dev at lists.berlios.de
> https://lists.berlios.de/mailman/listinfo/opensim-dev
>   



