[Opensim-dev] Question about the suitable interface

Neil Canham neil at knowsense.co.uk
Wed Feb 22 11:12:15 UTC 2012


Hi Georg
 It looks from the video as if you can already do what you want? You
already seem to be triggering animations and motions of the avatar based on
motions sensed from the Kinect. So what is the additional sophistication
that you want? You may well know what I'm about to describe; apologies if
that's the case. I'm really just summarising what has already been said.

There are two levels at which Kinect (or any other motion tracking) can be
used. One is 'translate and trigger', where the motions of the person are
detected, interpreted as, say, a nod, wave or head shake, and then used to
trigger the same pre-defined animation in the world. That seems to be what
you have already achieved, unless I misunderstood the video. This is most
easily done either by simulating key presses (which I think is what you
already did) or by modifying the client to launch animations directly -
neater, but probably no real difference in end-user experience, I suspect.
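
For illustration, here is a minimal Python sketch of that 'translate and
trigger' loop. The skeleton_stream source and the 'f1' key binding are my
assumptions, not anything specific - any Kinect wrapper yielding per-frame
joint positions would do, and the key would be whatever your viewer maps
to the gesture:

    import pyautogui  # simulates the key press the viewer maps to a gesture

    NOD_THRESHOLD = 0.05  # metres of vertical head movement; tune to taste
    WINDOW = 30           # ~1 second of frames at 30 fps

    def classify(frames):
        """Very naive nod detector over a short window of head positions."""
        ys = [f['head'][1] for f in frames]
        return 'nod' if max(ys) - min(ys) > NOD_THRESHOLD else None

    def run(skeleton_stream):
        window = []
        for frame in skeleton_stream:   # frame: dict of joint -> (x, y, z)
            window.append(frame)
            window = window[-WINDOW:]
            if classify(window) == 'nod':
                pyautogui.press('f1')   # key bound to the 'nod' animation
                window = []
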
The other way to use the motion tracking is puppeteering, where the avatar
really copies all your motions slavishly (or with filtering/modification)
in real time. This is what the TUIS guys in Japan have done (the video
Justin linked, with open-source code here:
http://www.nsl.tuis.ac.jp/xoops/modules/xpwiki/?SLKinect). This requires
an additional animation relay server to stream the animation data in real
time to all the clients, since there is no real-time streaming animation
protocol in SL/OpenSim.
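
To give a feel for the relay topology (not the actual SLKinect wire
format, which I haven't studied), a toy relay could be as simple as a UDP
loop that fans incoming pose packets out to every registered viewer - the
port number and SUBSCRIBE handshake here are purely illustrative:

    import socket

    RELAY_ADDR = ('0.0.0.0', 9999)  # port is arbitrary for this sketch

    def relay():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(RELAY_ADDR)
        viewers = set()
        while True:
            data, addr = sock.recvfrom(2048)
            if data == b'SUBSCRIBE':     # viewers register themselves once
                viewers.add(addr)
            else:                        # fan pose data out to all viewers
                for v in viewers:
                    sock.sendto(data, v)

    if __name__ == '__main__':
        relay()
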

I guess an intriguing third possibility is to try to capture animations in
real time at the client end, convert them to SL/OpenSim format animations,
upload them and trigger them. But that would be very laggy indeed,
initially at least.
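
In rough Python terms, that pipeline would be something like the
following, where convert_to_anim and upload_and_trigger are placeholders
for whatever converter and upload path you would actually use (e.g. going
via BVH) - the lag is the clip length plus conversion plus upload time:

    CLIP_SECONDS = 2
    FPS = 30

    def record_clip(skeleton_stream):
        """Buffer a fixed-length clip of frames before conversion."""
        frames = []
        for frame in skeleton_stream:
            frames.append(frame)
            if len(frames) >= CLIP_SECONDS * FPS:
                return frames

    def capture_and_replay(skeleton_stream, convert_to_anim,
                           upload_and_trigger):
        while True:
            clip = record_clip(skeleton_stream)   # capture at the client
            asset = convert_to_anim(clip)         # convert to .anim format
            upload_and_trigger(asset)             # upload, then play it
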

So - it all depends on exactly what you want to achieve and what the use
case is.

Neil Canham
CTO vComm Solutions GmbH

On Wed, Feb 22, 2012 at 10:36 AM, Georg Janke <djs_fan_and_freak at web.de>
wrote:
> Thanks for the many replies.
>
> Now, to me, the best approach seems to be the solution that uses the
> viewer. I've already seen the majority of the links before; there is some
> really good stuff among them.
> One important question remains for me: in case I use the viewer solution,
> do other clients see the gesture animations too? This is crucial because I
> am going to control communication-supporting gestures. If only I can see
> my avatar doing them, it would be useless.
>
> @Rock: Yeah, I've read both articles before - handy stuff. But my aim is
> different.
>
> @Nebadon Izumi: Thanks for the link list. In a workshop, two students and
> I created something similar to the content of your second link.
> Here is a video showing the result:
> http://www.youtube.com/watch?v=tw27aZdklek
> For that solution we triggered virtual key strokes, but now I want
> something more sophisticated.