Automated Testing


As OpenSim matures, we are extremely interested in adding more automated verification into the OpenSim source tree. Tests exist not only to prevent bugs from creeping in, but also to provide fair warning that the behavior of the system has changed and that tests may need updating.

In OpenSim today we use NUnit tests. Our conventions are:

  1. Tests should not exist inside runtime assemblies, as this makes NUnit a production requirement
  2. Tests should be in .Tests.dll assemblies. For instance, the tests for OpenSim.Data.SQLite.dll should be in the OpenSim.Data.SQLite.Tests.dll assembly. This allows for easy removal of test assemblies in products.
  3. Tests should be as close to the code as possible, but not intermingled. So the tests for OpenSim/Data/SQLite should be in OpenSim/Data/SQLite/Tests/. Through the use of the Exclude keyword in prebuild.xml you can ensure that this directory is part of OpenSim.Data.SQLite.Tests.dll and not OpenSim.Data.SQLite.dll. See the clarification below.
  4. Tests testing a class should be grouped into a test class file called xxxTest.cs, where xxx is the name of the class being tested.
  5. Tests should be able to run safely in a production environment. That means care must be taken not to damage data on the machine on which they are run.
  6. Tests should be deterministic, in other words repeatable. Avoid randomness in tests. See good and bad testing practices below.

Core Functionality Missing Unit Tests

This is a list of core functionality which is not covered by unit tests:

  1. Database Modules (these are MySQL tables)
    1. prim shapes
    2. region ban
    3. land
    4. landaccesslist
    5. estate_groups
    6. estate_managers
    7. estate_map
    8. estate_settings
    9. estate_users
    10. estateban
    11. avatarattachments

Writing Tests

Writing a new unit test is pretty easy, and very helpful in increasing the stability of OpenSim by nailing down bugs. I'm going to present an example here of SQLite Asset testing to show how simple such a test case is to write. The actual in-tree SQLite Asset tests are a little different because the code was factored out so that it could easily be applied to any database driver, so don't be concerned that what you see here isn't in the tree.

Clarification: Make sure your master project (not the test project) has an entry for files like the following:

      <Files>
        <Match pattern="*.cs" recurse="true">
          <Exclude name="Tests" pattern="Tests" />
        </Match>
      </Files>

NUnit Conventions

An NUnit test suite:

  • is a class with a default constructor (takes no arguments)
  • has public methods that are tests
  • uses annotations to determine which methods are tests
  • runs its tests in alphabetical order by method name

An NUnit test method:

  • must be public
  • must return void
  • must take no arguments
  • is successful if no exception or assert is thrown while running it

The run order is important if you want to have early tests that set up some complicated state (like creating objects), and have later tests remove or update that state. For that reason I find it very helpful to name all test methods Txxx_somename where xxx is a number between 000 and 999. That guarantees no surprises in run order.

An Example Test - SQLite Assets

using System;
using System.IO;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Framework.SyntaxHelpers;
using OpenSim.Framework;
using OpenSim.Data.Tests;
using OpenSim.Data.SQLite;
using OpenSim.Region.Environment.Scenes;
using OpenMetaverse;
 
namespace OpenSim.Data.SQLite.Tests
{
    [TestFixture]
    public class SQLiteAssetTest
    {
        public string file;
        public string connect;
        public AssetDataBase db;
        public UUID uuid1;
        public UUID uuid2;
        public UUID uuid3;
        public string name1;
        public string name2;
        public string name3;
        public byte[] asset1;
 
        [TestFixtureSetUp]
        public void Init()
        {
            uuid1 = UUID.Random();
            uuid2 = UUID.Random();
            uuid3 = UUID.Random();
            name1 = "asset one";
            name2 = "asset two";
            name3 = "asset three";
 
            asset1 = new byte[100];
            asset1.Initialize();
            file = Path.GetTempFileName() + ".db";
            connect = "URI=file:" + file + ",version=3";
            db = new SQLiteAssetData();
            db.Initialise(connect);
        }
 
        [TestFixtureTearDown]
        public void Cleanup()
        {
            db.Dispose();
            System.IO.File.Delete(file);
        }
 
        [Test]
        public void T001_LoadEmpty()
        {
            Assert.That(db.ExistsAsset(uuid1), Is.False);
            Assert.That(db.ExistsAsset(uuid2), Is.False);
            Assert.That(db.ExistsAsset(uuid3), Is.False);
        }
 
        [Test]
        public void T010_StoreSimpleAsset()
        {
            AssetBase a1 = new AssetBase(uuid1, name1);
            AssetBase a2 = new AssetBase(uuid2, name2);
            AssetBase a3 = new AssetBase(uuid3, name3);
            a1.Data = asset1;
            a2.Data = asset1;
            a3.Data = asset1;
 
            db.CreateAsset(a1);
            db.CreateAsset(a2);
            db.CreateAsset(a3);
 
            AssetBase a1a = db.FetchAsset(uuid1);
            Assert.That(a1a.ID, Is.EqualTo(uuid1));
            Assert.That(a1a.Name, Is.EqualTo(name1));
 
            AssetBase a2a = db.FetchAsset(uuid2);
            Assert.That(a2a.ID, Is.EqualTo(uuid2));
            Assert.That(a2a.Name, Is.EqualTo(name2));
 
            AssetBase a3a = db.FetchAsset(uuid3);
            Assert.That(a3a.ID, Is.EqualTo(uuid3));
            Assert.That(a3a.Name, Is.EqualTo(name3));
        }
 
        [Test]
        public void T011_ExistsSimpleAsset()
        {
            Assert.That(db.ExistsAsset(uuid1), Is.True);
            Assert.That(db.ExistsAsset(uuid2), Is.True);
            Assert.That(db.ExistsAsset(uuid3), Is.True);
        }
    }
}

You can see 4 of the important annotations here:

  • TestFixture - this class is a test suite
  • TestFixtureSetUp - this code is always run before any of the tests are executed
  • TestFixtureTearDown - this code is always run after the tests are done executing, even if they fail
  • Test - this method is a test

Setup / Teardown

In the case of testing something like the database layer, we have to actually attempt to store / retrieve things from a database. Following rule #5 of good tests, we want to make sure not to touch the production databases when running our tests, so during startup we generate a temporary file name which is guaranteed not to be an existing file on the system, and use that as our database file name. By running db.Initialise() the OpenSim migration code will correctly populate that database with the latest schema.

Once we are done with the tests we want to make sure we aren't leaving garbage temp files on the user's system, so we remove the file we created.

During setup we also create a set of state variables, such as 3 UUIDs, 3 strings, and a data block. You could have just stuck these inline, but variables are there for a reason, so use them.

Asserts

You will see Assert.That(...) scattered through the code. These calls throw an exception if the condition does not hold. This style of assertion is called the Constraint Model in NUnit, and provides a large number of tests with the flavor of:

  • Assert.That(foo, Is.Null)
  • Assert.That(foo, Is.Not.Null)
  • Assert.That(foo, Is.True)
  • Assert.That(foo, Is.EqualTo(bar))
  • Assert.That(foo, Text.Matches( "*bar*" ))

Of note, Is.EqualTo uses the Equals method of foo, so it is only meaningful for objects that implement value equality. Most of the OpenSim base objects do not, so you'll have to compare fields manually in tests, as in the sketch below.
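As a minimal sketch of what that manual comparison can look like (the AssertAssetsEqual helper below is hypothetical, not something in the tree), the per-field asserts can be factored into one method added to the test fixture above and reused across tests:

private static void AssertAssetsEqual(AssetBase expected, AssetBase actual)
{
    // AssetBase does not implement value equality, so Is.EqualTo on the
    // objects themselves would only check reference equality.  Compare
    // the individual fields instead.
    Assert.That(actual.ID, Is.EqualTo(expected.ID));
    Assert.That(actual.Name, Is.EqualTo(expected.Name));
    Assert.That(actual.Data, Is.EqualTo(expected.Data));
}

With a helper like this, each of the three fetch-and-check stanzas in T010 above collapses to a single call.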

For the complete set of conditions you can use, see the Constraint Model NUnit documentation. While there is another syntax for tests, the Constraint Model is preferred as it is far more human-readable.

Simple Negative Tests

Test T001 is an example of a simple negative test. We assume a new database will not have any of those assets in it. While the value of this test may look low, it provides a baseline: it ensures that the database connection is there, that these calls correctly return false, and that no exception is thrown. Negative tests are a good way to force bounds conditions and to ensure that the code not only returns what you expect but also doesn't return what you don't expect. Thought of another way, they ensure your code is somewhat defensive in nature, not choking on bad or unexpected data.

Simple Positive Tests

T010 is an example of a simple positive test. In it we create and store 3 assets (ensuring no exceptions), then load those 3 assets back from the database and ensure the fields are correct. Because AssetBase does not implement value equality, we just check the ID and Name fields with equality tests. If any of the Asserts fail, the whole test fails.

Stateful Tests

T011 is an example of a stateful test, because it requires the state created by T010 (i.e. the creation of those 3 objects). In order to test any kind of complicated scenario you will find that you need stateful tests to build up various amounts of state (testing along the way), then manipulate and possibly tear it down. Without doing this you can't do truly deep testing of functionality in any complex environment. This example isn't very stateful (I tried to pick an easy example), but it should give you some ideas.

Speculative Tests

Speculative tests are tests that might or might not apply in a given situation. MySQL testing in the OpenSim tree is done this way: the tests run only if there is a properly configured database, and otherwise they are skipped. If you execute Assert.Ignore() in a Test, the test will end and be ignored. If you run Assert.Ignore() in the TestFixtureSetUp, all tests in the test fixture will be skipped and ignored.

Speculative testing lets you create tests that require certain preconditions which you can't guarantee on all platforms and configurations, and it is an important part of deep testing.
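As a minimal sketch of the pattern (the fixture name and the OPENSIM_TEST_MYSQL environment variable are hypothetical, invented for illustration; the in-tree MySQL tests locate their configuration differently), a speculative fixture might look like:

using System;
using NUnit.Framework;
using NUnit.Framework.SyntaxHelpers;

namespace OpenSim.Data.MySQL.Tests
{
    [TestFixture]
    public class SpeculativeExampleTest
    {
        public string connect;

        [TestFixtureSetUp]
        public void Init()
        {
            // If no test database is configured, skip every test in
            // this fixture rather than failing the whole run.
            connect = Environment.GetEnvironmentVariable("OPENSIM_TEST_MYSQL");
            if (String.IsNullOrEmpty(connect))
                Assert.Ignore();
        }

        [Test]
        public void T001_OnlyRunsWithDatabase()
        {
            // Only reached on machines where the precondition held.
            Assert.That(connect, Is.Not.Null);
        }
    }
}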

Good / Bad Test practices

Creating good tests is an art, not a science. Tests are useful in proportion to how many bugs they find or prevent. Things you should think about when creating good tests are:

  • Throw edge cases, like 0, "", or null, at parameters. This ensures that functions are hardened against incomplete data. Many of our crashes come from the lack of this hardening showing up at just the wrong time.
  • Use stateful testing to build up complex scenarios. This is more useful than just cursory get / set calls.
  • Random tests are not a good idea: we need test results to be deterministic, in other words repeatable. If you want to test a range, it is a good idea to make separate tests for the min and max values, as in the sketch after this list. Random values in fields can fail randomly, and when something goes wrong, for example in the database schema, the developer will not necessarily notice if the stored values are random. On the other hand, it is hard to troubleshoot randomly failing tests as you don't know which specific value caused the failure.
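As a minimal sketch of that min/max advice (the method names, the fresh UUIDs, and the 64 KiB size are hypothetical, written as if added to the SQLite asset fixture above), pin both boundaries down in their own deterministic tests instead of storing one random-length data block:

[Test]
public void T020_StoreEmptyAsset()
{
    // Minimum boundary: a zero-length data block.
    UUID id = UUID.Random();
    AssetBase a = new AssetBase(id, "empty asset");
    a.Data = new byte[0];
    db.CreateAsset(a);
    Assert.That(db.FetchAsset(id).Data.Length, Is.EqualTo(0));
}

[Test]
public void T021_StoreLargeAsset()
{
    // Maximum boundary: a fixed, deterministic 64 KiB data block.
    UUID id = UUID.Random();
    AssetBase a = new AssetBase(id, "large asset");
    a.Data = new byte[65536];
    db.CreateAsset(a);
    Assert.That(db.FetchAsset(id).Data.Length, Is.EqualTo(65536));
}

The random UUIDs here only pick fresh keys; the values under test (the data block sizes) stay fixed.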

Adding Tests to the Tree

As we said previously, all tests for assembly OpenSim.Foo (in directory OpenSim/Foo) should:

  • be in the OpenSim.Foo.Tests.dll assembly
  • not be in the OpenSim.Foo.dll assembly
  • be in the OpenSim/Foo/Tests directory

Also, if you have created a new test assembly you must add references to it in both .nant/local.include and .nant/bamboo.build to ensure that the assembly is included in the automated Bamboo runs as well as the nant test target.

Executing Tests

Bamboo

On every commit to OpenSim all the tests are run on a Bamboo build server on opensimulator.org. The process takes about 5 minutes to build, test, and report the results back out on #opensim-dev via the osmantis bot.

Nant

You can manually run all the tests for OpenSim on your system by running the nant test target. This will run all the tests in the tree, though some speculative tests might be ignored if your platform does not have the right features or configuration to run them.

NUnit Console

If you only want to run the tests for one assembly you can do that using the NUnit Console. On Linux just run nunit-console2 OpenSim.Foo.Tests.dll and it will run only the tests in OpenSim.Foo.Tests.dll. If you are only making changes to one DLL and just want a quick sanity check, this is the fastest way to get one.

Debugging Tests

There is a special page dedicated to this. See Debugging Unit Tests.

Learning More

You should definitely read the documentation at the NUnit homepage if you want to know more about testing. It's a very good reference for all the APIs in NUnit that you can use for creating tests.

Code Coverage

A prototype has been included using monocov, which relies on a profiler built into Mono. Instructions for using monocov can be found on the Mono homepage, in the Code Coverage section. Unfortunately nunit2 and nant do not support code coverage with monocov (only some proprietary code coverage tools), so there is no <monocov> tag. The solution was to implement it using a number of nunit-console invocations in <exec> tags.

ATTENTION: Code coverage only works with mono 1.2.x; any other version will most likely not work. Code coverage development is on hold for now, until monocov supports newer mono versions.

Running

Download monocov, and nunit-console if your nunit-console does not work. To test whether it works (the version in Ubuntu Hardy did not), just run one of the tests with nunit-console.

Install monocov (./configure, make install; there could be some minor conflicts, and I had to add -I/usr/include/mono-1.0/ to the compile line) and make sure the working nunit-console.exe is in /usr/lib/nunit/nunit-console.exe. Now just run nant test-cov and it will generate .cov files and HTML directories in the cov directory. The .cov files can be viewed by running monocov on them, and contain the same information as the HTML directories.

Testing Todo

Coverage

A prototype has been done and documented. Now we must keep an eye on whether monocov evolves to support newer mono versions.
