[Opensim-users] Load Testing
Mariusz Nowostawski
mariusz at nowostawski.org
Thu Mar 5 03:58:11 UTC 2009
Dave Coyle wrote:
> On 2009-03-04 01:33:40 -0500, dirk.krause at pixelpark.com wrote:
>
>> I am not aware of any attempts to get enough users together ... was
>> this announced on this list? I was sort of waiting in the wings to
>> get all active users of my company together to contribute to such a
>> test, but maybe I just missed it :-/.
>>
>
> Would there be any interest in a regularly scheduled "come and break
> stuff hour" on OSGrid? Time, victim^H^H^H^H target sim, and desired
> activity would be prearranged, and at the appointed time and place
> anyone who's interested logs in and we see what happens. We could
> test TPs or region crossings or messaging modules or rezzing objects
> or sitting in a boat and sailing around in a circle or capacity of
> different hardware combinations. Wright Plaza seems to get a good
> turnout for weekly office hours, but I think everyone goes out of
> their way to not do anything to break stuff. The point of "come and
> break stuff hour" would be to do the opposite.
>
> Also, right now the answer to "what hardware do I need to run
> OpenSim?" is "it depends". It would be great if we could test various
> combinations of hardware/.NET/Mono builds/etc. and get some more
> concrete info on what works, what's mediocre, and what fails.
>
> I haven't been to the "Q&A / Testing" hour on OSGrid in a while, but
> when I did attend it was focused on Q&A.
>
> Say, Thursdays @ 0300 UTC? We could give it a go a couple times and,
> depending on turnout, decide if it's worth making a regular thing.
> And it's an earlier/later time to try to accommodate people in other
> regions/on different schedules.
>
> Anyone interested?
Load testing by assembling users in an ad-hoc manner has its place, and
it might be a good way of keeping track of certain server-side behaviour
and user experiences. However, logs from such sessions are of limited
use, because the tests cannot be repeated and no statistics can be
obtained. In our experience we have observed quite substantial
variation in certain load parameters with no correlation to what was
actually being done in a particular instance. Identifying the causes of
certain bottlenecks or server behaviours is not easy, and multiple runs
of the same test schema, varying only a few parameters, are useful in
that regard.
Generally, there are three types of tests that people currently run on
OpenSim:
A) ad-hoc gatherings of users (as described above), without the ability
to replicate the tests or conditions, and without the ability to draw
statistics on the server behaviour
B) more formal runs with real users (and real user clients) in a
controlled environment, where effectively the same runs can be repeated
over and over again to obtain statistical behaviour and to identify
bottlenecks by controlling a limited number of factors
C) runs without real clients and without real users, where the inputs to
the system are generated by TestClients that "simulate" real clients -
these can be used, as Mic Bowman explained, to test various individual
aspects of server performance, and they are ideal for tuning those
isolated modules of the server (a minimal sketch of such a driver
follows this list)
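
To make the type-C idea concrete, here is a minimal sketch (in Python)
of a driver that spawns a number of simulated clients in parallel and
waits for them all to finish. The "mono TestClient.exe" invocation is
only a placeholder for whatever simulated-client command you actually
run; it is not the real TestClient command line.

#!/usr/bin/env python
# Hypothetical type-C load driver: spawn several simulated clients in
# parallel and wait for them to finish. The command below is a
# placeholder, NOT the real libopenmetaverse TestClient interface.
import subprocess
import time

NUM_CLIENTS = 20
CMD = ["mono", "TestClient.exe"]  # assumed command; adjust to your setup

procs = []
start = time.time()
for i in range(NUM_CLIENTS):
    # One log file per simulated client, kept for later analysis.
    log = open("client-%02d.log" % i, "w")
    p = subprocess.Popen(CMD, stdout=log, stderr=subprocess.STDOUT)
    procs.append((p, log))

for p, log in procs:
    p.wait()
    log.close()

print("all %d clients finished in %.1f s"
      % (NUM_CLIENTS, time.time() - start))

Each client writes its own log, so a later pass can pull timings or
error counts out of the per-client output.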
Each of these has its place, and they all play complementary roles. At
Otago, we focus primarily on B, where real clients are used to test the
system holistically and to investigate the different cross-dependencies
between the server modules. I do not think type A can be formalised or
automated. As for B, we have tried to automate as much as possible so
as to make the tests easier to run and manage, but we are far from
NAnt-like automation. It should be possible to automate case C and
conduct multiple runs to obtain statistics in a fully automated way; a
sketch of what such a run loop might look like follows below. We are
quite interested in that, and will watch where the developments are
heading.
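
For the statistics side, the same idea extends naturally: run the
identical scenario several times and summarise one metric across runs.
A sketch under assumed names - both run-scenario.sh and the
"login_time:" log line are made up for illustration:

# Repeat an identical scenario several times and summarise one metric
# across runs. run-scenario.sh and the "login_time:" log format are
# assumptions made for illustration only.
import re
import statistics
import subprocess

RUNS = 10
samples = []
for i in range(RUNS):
    subprocess.call(["./run-scenario.sh"])  # hypothetical full test run
    with open("client-00.log") as f:
        m = re.search(r"login_time:\s*([\d.]+)", f.read())
    if m:
        samples.append(float(m.group(1)))

if len(samples) >= 2:
    print("runs=%d  mean=%.2f  stdev=%.2f"
          % (len(samples), statistics.mean(samples),
             statistics.stdev(samples)))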
--
cheers
Mariusz