<html>
<head>
<style>
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
FONT-SIZE: 10pt;
FONT-FAMILY:Tahoma
}
</style>
</head>
<body class='hmmessage'>Gerikes,<BR>
<BR>
Actually, several of the core devs are great fans of unit testing with NUnit, myself included - our lack of tests is partly historical, and partly because testing risks becoming a barrier for non-test-savvy developers.<BR>
<BR>
This said, anyone is free to contribute good tests; but for the sake of discussion, I'd like to propose a set of ground rules:<BR>
<BR>
#1) Tests should be unit tests: they should involve only a few classes and be narrow in scope (test only a few results)<BR>
#2) Tests should primarily test outcomes, not internal behaviour (i.e., what is returned, not what is called or how state changes)<BR>
#3) Anyone should be allowed to throw away a test that they don't understand well enough to change.<BR>
#4) Tests should be a pragmatic part of your daily coding, not a one-off policy thing.<BR>
<BR>The motivations for these points are:<BR>
<BR>
#1) The more classes a test involves, the more it will hinder effortless refactoring, such as moving functions from one place to another. The tests themselves become 'couplers'.<BR>
#2) Decoupled code should ideally keep minimal state, and setting up tests for behaviour and internal state tends to make them 'broad' and 'complex' (violating #1 and #3)<BR>
#3) Tests should make you faster, not slower. If changing the code becomes difficult because of a test, the test should go, not the change. Either chuck the test or narrow its scope.<BR>
#4) Code up a testing workbench instead of step-debugging. I promise you will gain. Debug by introducing tests based on your assumptions: if you see something you think you know the outcome of, make a test for it to be sure.<BR>
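<BR>
To make #1 and #2 concrete, here is a minimal sketch of the kind of narrow, outcome-focused NUnit test I mean. The Inventory class and its members are hypothetical, invented just for illustration - they are not part of the OpenSim code base:<BR>
<BR>

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical class under test, included so the example is self-contained.
public class Inventory
{
    private readonly List<string> items = new List<string>();
    public void Add(string item) { items.Add(item); }
    public int Count { get { return items.Count; } }
}

[TestFixture]
public class InventoryTests
{
    // Narrow scope (#1): one class, one behaviour per test.
    // Outcome-focused (#2): asserts on the returned count,
    // not on which methods were called or on internal state.
    [Test]
    public void AddingAnItemIncreasesCount()
    {
        Inventory inv = new Inventory();
        inv.Add("torch");
        Assert.AreEqual(1, inv.Count);
    }
}
```

<BR>
A test this small stays out of the way when code gets refactored, and anyone can read it in a few seconds and decide whether to keep it (#3).<BR>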
<BR>
This is my truth, now give me yours.<BR>
/Stefan<BR>
<BR>
<HR id=stopSpelling>
<BR>
&gt; Date: Sat, 15 Dec 2007 13:16:36 -0500<BR>&gt; From: sean@dague.net<BR>&gt; To: opensim-dev@lists.berlios.de<BR>&gt; Subject: Re: [Opensim-dev] Automated Testing<BR>&gt; <BR>&gt; On Sat, Dec 15, 2007 at 12:53:36PM -0500, Gerikes wrote:<BR>&gt; &gt; Hi all.<BR>&gt; &gt; <BR>&gt; &gt; Just starting to look at this project, and it seems interesting enough. I come<BR>&gt; &gt; from a short background creating internal corporate web sites using C# /<BR>&gt; &gt; Monorail, but I have interests in virtual worlds, and am happy to see such a<BR>&gt; &gt; project being built on .NET.<BR>&gt; &gt; <BR>&gt; &gt; Anyway, I've been using automated unit and integration testing for almost a<BR>&gt; &gt; year now, and found it drastically increased the quality of my code. What is<BR>&gt; &gt; the position of the project on how unit testing is implemented?<BR>&gt; &gt; Does anyone on the project write their own tests but just not commit them?<BR>&gt; &gt; <BR>&gt; &gt; It would be hasty of me to come in here trying to change the world (so to<BR>&gt; &gt; speak), so perhaps I'll start smaller. I'm planning on trying to grok the<BR>&gt; &gt; code base by developing my own test harness that can be used for integration<BR>&gt; &gt; testing. Basically, it would be a region that could be started and have<BR>&gt; &gt; tests run against it, probably in the form of sending commands through bots<BR>&gt; &gt; connected using libsecondlife. 
The API would allow the tests to run<BR>&gt; &gt; multiple times, perhaps switching out things like which data store manager<BR>&gt; &gt; is used during the test, so that a single test can be run against multiple<BR>&gt; &gt; implementations.<BR>&gt; &gt; <BR>&gt; &gt; I would try to keep the creation of tests simple and straightforward,<BR>&gt; &gt; such as...<BR>&gt; &gt; <BR>&gt; &gt; <BR>&gt; &gt; [Test]<BR>&gt; &gt; public void CanInstantMessage()<BR>&gt; &gt; {<BR>&gt; &gt; const string message = "Test Message";<BR>&gt; &gt; <BR>&gt; &gt; TestAvatar avatar1 = ActivateNewAvatar();<BR>&gt; &gt; TestAvatar avatar2 = ActivateNewAvatar();<BR>&gt; &gt; avatar1.Perform(Action.InstantMessage(avatar2, message));<BR>&gt; &gt; <BR>&gt; &gt; Assert.AreEqual(1, avatar2.RecievedMessages.From(avatar1).Count);<BR>&gt; &gt; Assert.AreEqual(message, avatar2.RecievedMessages.From<BR>&gt; &gt; (avatar1).First().Message);<BR>&gt; &gt; }<BR>&gt; &gt; <BR>&gt; &gt; So that's it, I'm interested in any feedback you have.<BR>&gt; <BR>&gt; ++++ on getting some automated testing into the tree. The lack of that<BR>&gt; right now is mostly based on rapid turnover in the current code and<BR>&gt; lack of time, not lack of interest.<BR>&gt; <BR>&gt; Patches that start to integrate automated testing into the tree are<BR>&gt; welcome as long as they can be executed on both MS.NET and Mono using<BR>&gt; nant (we're very strongly bi-platform).<BR>&gt; <BR>&gt; My suggestion is to start small on one area of the code and get that<BR>&gt; well tested, then extend throughout. I think you'll get a lot of support<BR>&gt; from everyone in the dev team if you jump in and start contributing<BR>&gt; here.<BR>&gt; <BR>&gt; More work on test engineering is one of my New Year's resolutions.<BR>&gt; <BR>&gt; -Sean<BR>&gt; <BR>&gt; -- <BR>&gt; __________________________________________________________________<BR>&gt; <BR>&gt; Sean Dague Mid-Hudson Valley<BR>&gt; sean at dague dot net Linux Users Group<BR>&gt; http://dague.net http://mhvlug.org<BR>&gt; <BR>&gt; There is no silver bullet. 
Plus, werewolves make better neighbors<BR>> than zombies, and they tend to keep the vampire population down.<BR>> __________________________________________________________________<BR><BR></body>
</html>