Automated Testing
Overview
As OpenSim matures, we are extremely interested in adding more automated verification to the OpenSim source tree. Tests exist not only to prevent bugs from creeping in, but also to give fair warning that the behavior of the system has changed and the tests may need updating.
In OpenSim today we use NUnit tests. Our conventions are:
1. Tests should not live inside runtime assemblies, as this would make NUnit a production requirement.
2. Tests should be in .Tests.dll assemblies. For instance, the tests for OpenSim.Data.SQLite.dll should be in the OpenSim.Data.SQLite.Tests.dll assembly. This allows test assemblies to be easily removed from products.
3. Tests should be as close to the code as possible, but not intermingled. So the tests for OpenSim/Data/SQLite should be in OpenSim/Data/SQLite/Tests/. Through the use of the Exclude keyword in prebuild.xml you can ensure that directory becomes part of OpenSim.Data.SQLite.Tests.dll and not OpenSim.Data.SQLite.dll (see the sketch under Integration into the Build Tree below).
4. Tests should be able to run safely in a production environment, which means taking care not to damage data on the machine on which they are run.
Writing New Tests
Writing a new unit test is pretty easy, and very helpful in increasing the stability of OpenSim by nailing down bugs. I'm going to present an example here of SQLite asset testing to show how simple such a test case is to write. The actual in-tree SQLite asset tests are a little different, because the code was factored out so that it could easily be applied to any database driver, so don't be concerned that what you see here isn't in the tree.
NUnit Conventions
An NUnit test suite:
- is a class with a default constructor (takes no arguments)
- has public methods that are tests
- uses attributes to determine which methods are tests
- runs its tests in alphabetical order by method name
An NUnit test method:
- must be public
- must return void
- must take no arguments
- is successful if no exception is thrown and no assertion fails while running it
The run order is important if you want to have early tests that set up some complicated state (like creating objects), and later tests that remove or update that state. For that reason I find it very helpful to name all test methods Txxx_somename, where xxx is a number between 000 and 999. That guarantees no surprises in run order.
An Example Test - SQLite Assets
using System;
using System.IO;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Framework.SyntaxHelpers;
using OpenSim.Framework;
using OpenSim.Data.Tests;
using OpenSim.Data.SQLite;
using OpenSim.Region.Environment.Scenes;
using OpenMetaverse;
namespace OpenSim.Data.SQLite.Tests
{
    [TestFixture]
    public class SQLiteAssetTest
    {
        public string file;
        public string connect;
        public AssetDataBase db;
        public UUID uuid1;
        public UUID uuid2;
        public UUID uuid3;
        public byte[] asset1;

        [TestFixtureSetUp]
        public void Init()
        {
            uuid1 = UUID.Random();
            uuid2 = UUID.Random();
            uuid3 = UUID.Random();
            asset1 = new byte[100];
            asset1.Initialize();

            file = Path.GetTempFileName() + ".db";
            connect = "URI=file:" + file + ",version=3";

            db = new SQLiteAssetData();
            db.Initialise(connect);
        }

        [TestFixtureTearDown]
        public void Cleanup()
        {
            db.Dispose();
            System.IO.File.Delete(file);
        }

        [Test]
        public void T001_LoadEmpty()
        {
            Assert.That(db.ExistsAsset(uuid1), Is.False);
            Assert.That(db.ExistsAsset(uuid2), Is.False);
            Assert.That(db.ExistsAsset(uuid3), Is.False);
        }

        [Test]
        public void T010_StoreSimpleAsset()
        {
            AssetBase a1 = new AssetBase(uuid1, "asset one");
            AssetBase a2 = new AssetBase(uuid2, "asset two");
            AssetBase a3 = new AssetBase(uuid3, "asset three");
            a1.Data = asset1;
            a2.Data = asset1;
            a3.Data = asset1;

            db.CreateAsset(a1);
            db.CreateAsset(a2);
            db.CreateAsset(a3);

            AssetBase a1a = db.FetchAsset(uuid1);
            Assert.That(a1.ID, Is.EqualTo(a1a.ID));
            Assert.That(a1.Name, Is.EqualTo(a1a.Name));

            AssetBase a2a = db.FetchAsset(uuid2);
            Assert.That(a2.ID, Is.EqualTo(a2a.ID));
            Assert.That(a2.Name, Is.EqualTo(a2a.Name));

            AssetBase a3a = db.FetchAsset(uuid3);
            Assert.That(a3.ID, Is.EqualTo(a3a.ID));
            Assert.That(a3.Name, Is.EqualTo(a3a.Name));
        }

        [Test]
        public void T011_ExistsSimpleAsset()
        {
            Assert.That(db.ExistsAsset(uuid1), Is.True);
            Assert.That(db.ExistsAsset(uuid2), Is.True);
            Assert.That(db.ExistsAsset(uuid3), Is.True);
        }
    }
}
Integration into the Build Tree
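The convention from the overview (tests in a Tests/ subdirectory, excluded from the production assembly) is wired up in prebuild.xml. As a rough, abbreviated sketch of what the two project entries might look like (real entries also declare references, configurations, and output paths, and the exact attribute names should be checked against the prebuild.xml already in the tree):

    <!-- Production assembly: exclude the Tests subdirectory -->
    <Project name="OpenSim.Data.SQLite" path="OpenSim/Data/SQLite" type="Library">
      <Files>
        <Match pattern="*.cs" recurse="true">
          <Exclude name="Tests" pattern="Tests"/>
        </Match>
      </Files>
    </Project>

    <!-- Test assembly: built only from the Tests subdirectory -->
    <Project name="OpenSim.Data.SQLite.Tests" path="OpenSim/Data/SQLite/Tests" type="Library">
      <Files>
        <Match pattern="*.cs" recurse="true"/>
      </Files>
    </Project>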
Good / Bad Test practices
Running Tests
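Once the .Tests.dll assemblies are built, each can be run individually with the NUnit console runner. As a sketch (the runner's name and location vary by platform and NUnit version; on Mono it may need to be invoked as nunit-console2 or through mono):

    nunit-console OpenSim.Data.SQLite.Tests.dll

If the build files define a test target, something like nant test can run all of the test assemblies in one pass; check the local build files for the exact target name.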
Goal
To create a system of automatic building, testing, and simulation so we can have better awareness of our changes to OpenSim.
Summary
To properly test OpenSim, we will need to address a number of objectives:
- Crossplatform Support
- Build Stability
- Code Test
- Code Performance
- Runtime Performance
- Code Style
- Web site results
- Communication
We will be using a number of in-house technologies as well as third-party ones (NUnit, NAnt, etc.) to address our challenges. We will strive to maintain all our testing tools in a package that anyone can use on any computer with any operating system. For convenience, a centralized site for OpenSim testing results will be created at http://testing.ambientunion.com .
People
- Daedius Moskvitch ( daedius @@@@ daedius.com)
Testing Objectives
Crossplatform Support
There are a number of objectives that should report on which operating system they were run:
- Build Stability - Make sure all our builds compile on every operating system
- Code Test - Make sure all our tests run on every operating system
- Code Performance - See how our functions run on every operating system
- Runtime Performance - See how operating systems actually run OpenSim
Build Stability
Ensure that the build is intact.
Code Test
Use NUnit, NCover
Code Performance
Need to ponder
Runtime Performance
Code Introspection
Use Gendarme
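As a sketch of what that could look like (the options here are from Gendarme's console runner and should be verified against the installed version):

    gendarme --html gendarme-report.html OpenSim.Framework.dll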
Web site results
Results of our testing should be accessible via a web service. This web service will be available via a maintained plugin for buildbot, so when running a buildbot master, one only needs a website that can access the web service and present it however one wishes. Our web service needs to be able to hand off a variety of information:
- Realtime status information of operations
- Crossplatform Support information on tests that have been run for some spiffy grids
- Build results for showing us what happens when things break
- Code test results for showing us what tests ran
- Code performance for showing us how all our profiled code worked
- Runtime Performance statistics information for bar graphs and charts
- Code Style results for showing which code structure names are non-standard
Communication
We should be able to notify the right people of various important changes (or not so important if they wish) via Email, IRC, etc.