[Opensim-dev] Automated Testing

Stefan Andersson stefan at tribalmedia.se
Thu Dec 20 13:16:05 UTC 2007


> As well as some of the points below (most of which I very much agree
> with), I would like to say that if we are to introduce unit tests, we
> should also seriously consider introducing continuous integration (as
> Gerikes is probably talking about below), where a machine rebuilds on
> every code change, runs the tests, and then publishes the results to the
> website. If any test fails we'll know at the point of breakage. If we
> have a culture of fixing broken tests immediately, this should enhance
> stability.
Yes, we actually had CI up and running on our old server, but when we moved, I guess we never brought it back up.
 
+1 on CI (duh); it's a great tool and minimizes the 'shit-nobody-ran-tests-for-a-week-and-now-everything's-red-and-I-don't-have-the-time-to-fix' factor.
 
> > #3) Anyone should be allowed to throw away a test that they don't
> > understand well enough to change.
>
> To me, this sounds like an excuse to ignore testing code. If the test
> works and passes (and doesn't test too much) then it should be
> understandable. Perhaps you're right, but I think the bar should be set
> high for this action.
 
It's the 'and doesn't test too much' part that tends to be the problem. But yeah, chucking a test isn't something that should be done on a whim. What I mean is that bad tests should be weeded out; tests should evolve just as much as the code they're testing. In most cases, you'd probably just narrow the test down, not throw it away completely.
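To make "narrowing a test down" concrete, here is a minimal sketch of the idea. It's in Java for illustration (OpenSim itself is C#, but the pattern is identical), and every class and method name here is hypothetical, not taken from OpenSim:

```java
// Hypothetical illustration of narrowing an over-broad test.
// InventoryFolder and all names below are invented for this sketch;
// they do not come from the OpenSim codebase.
import java.util.ArrayList;
import java.util.List;

class InventoryFolder {
    private final List<String> items = new ArrayList<>();

    void add(String item) { items.add(item); }

    int count() { return items.size(); }
}

public class NarrowedTestSketch {
    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    // Over-broad: one test coupled to several behaviors at once, so any
    // refactoring of add(), count(), or ordering turns it red, and a red
    // result tells you little about which behavior actually broke.
    static void broadFolderTest() {
        InventoryFolder f = new InventoryFolder();
        f.add("sword");
        f.add("shield");
        check(f.count() == 2, "broad test failed somewhere");
        // ...in a real over-broad test, assertions on ordering,
        // serialization, events, etc. would pile up here too.
    }

    // Narrowed: one behavior per test. A refactoring breaks at most the
    // test for the behavior it changed, and the failure message says why.
    static void addIncrementsCount() {
        InventoryFolder f = new InventoryFolder();
        int before = f.count();
        f.add("sword");
        check(f.count() == before + 1, "add() should grow the folder by one");
    }

    public static void main(String[] args) {
        broadFolderTest();
        addIncrementsCount();
        System.out.println("ok");
    }
}
```

The point isn't the assertions themselves but the scope: instead of deleting a confusing test outright, split or shrink it until each remaining test pins down one behavior you still care about.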
 
I have spent too much time pulling my hair out over overly complicated tests that, when red, explain nothing about what's gone wrong, to put up with that.
 
Also, we can never set testing as a policy in this project, so it has to be something people do out of their own needs.
> > #1) The more classes involved, the more the test will hinder
> > effortless refactoring, as when moving functions from one place
> > to another. The tests themselves become 'couplers'
>
> True. IDEs like Java ameliorate this (through their extensive
> refactoring support), but this becomes more difficult without such
> support - perhaps the non-free editions of Visual Studio have good
> support but the free one doesn't appear to, and there's not all that
> much in SharpDevelop or MonoDevelop). Hopefully with loose coupling
> this won't be quite such a problem.
 
Well, both MW and I are on VS200x with ReSharper, so we're VERY automated, which makes us rather itchy about things getting in the way of smooth refactoring.
> > #3) Test should make you faster, not slower. If changing the code
> > becomes difficult because of a test, the test should go, not the
> > change. Either chuck the test, or narrow its scope.
>
> I think that tests, with continuous integration, would also enhance
> stability. I've already seen a number of instances where working
> OpenSim functionality has been broken by subsequent changes.
 
Definitely; the regression situation is a sad affair in this project. Almost every major contribution breaks something else somewhere, which will make it VERY cumbersome to use this platform for commercial purposes. We will all have to do our own test cycles before putting new code into production.
 
That will be a very inefficient situation.
/Stefan
 
 
 

