[Opensim-dev] Update for group module & flotsam

Nebadon Izumi nebadon2025 at gmail.com
Sun Oct 28 16:18:42 UTC 2012


Michelle,

  Log into the wiki and go to this page:
http://opensimulator.org/wiki/Special:Preferences  I believe there is a
link there you can click to confirm your email address. Once you do this,
you should be able to make edits; we now require people to confirm their
email address because of all the spam we were getting.  Hope this helps,
and please let me know if you have trouble.

On Sun, Oct 28, 2012 at 2:01 PM, Michelle Argus <argus at archimuh.de> wrote:

> Thanks Justin. I can't edit the proposal page though, as I'm not a member
> of the emailconfirmed group.
>
> If routing the message through a gridwide service is not going to be
> implemented, I could also imagine a relay service which could have this
> function. Event regions could connect to this, or hosters could manage
> their own relay service which caches the data needed for those group
> functions which require more resources. But more on that once I can edit
> the proposal page.
>
>  Generally speaking, we see similar issues with all requests made. In many
> cases one could join multiple requests together into one request. Especially
> we Europeans in OSgrid are facing more problems with slow requests, and we
> urgently need relay points for assets, presence, online status of friends
> lists, map tiles, inventory etc. that reduce the issues caused by slow
> requests. This would also help those nearer to the grid servers, as fewer
> requests would be sent directly to them. Banning viewers is
> currently the only way we can minimize crashes due to the massive number
> of requests some viewers cause.
>
>
> On 23.10.2012 04:40, Justin Clark-Casey wrote:
>
>  Apologies, it's [1].   Please feel free to edit it as you see fit - I've
>> put you as one of the proposers.  This page is to keep track of the issue
>> rather than being a formal proposal mechanism.
>>
>> No rush on this - please feel free to take your time in responding.  In
>> truth, I only have a certain amount of time for these issues currently
>> myself.
>>
>> Having messages route through a service rather than being largely handled
>> by simulators themselves is an interesting approach.  It's the argument of a
>> distributed versus a more centralized architecture.  Although I can't see
>> OpenSimulator going down this route in the near future, if anybody wants to
>> experiment and needs additional config settings then patches are very
>> welcome.
>>
>> [1] http://opensimulator.org/wiki/Feature_Proposals/Improve_Groups_Service
>>
>> On 20/10/12 11:06, Michelle Argus wrote:
>>
>>> Justin, could you post the URL to the suggestion page? I think you
>>> forgot to add it ;)
>>>
>>>   One issue with having the sim update online status is that if
>>> someone has the group module disabled or uses a
>>> different setting, then the status is not updated. As other modules hosted
>>> by the grids might also need this information, one
>>> should consider adding something to the gridserver for this.
>>>
>>> I also like Akira's idea of having the group server receive the
>>> full IM and then send it to everyone, instead
>>> of having the sim send the message. One could then have a specialized
>>> server installed for the group module which cannot
>>> create any lag issues sim-side. This could also be used for a
>>> gridwide spam filter, or for filtering illegal activities
>>> within the grid.
>>>
>>> I haven't had much time though, as I have a longer event running which
>>> ends on Sunday...
>>>
>>>
>>> On 20.10.2012 04:32, Justin Clark-Casey wrote:
>>>
>>>> Regarding the groups work, I have now implemented an OpenSimulator
>>>> experimental option, MessageOnlineUsersOnly in
>>>> [Groups] as of git master 1937e5f.  When set to true this will only
>>>> send group IMs to online users.  This does not
>>>> require a groups service update.  I believe OSGrid is going to test
>>>> this more extensively soon, though it appears to
>>>> work fine on Wright Plaza.
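>>>>
>>>> For anyone who wants to try it, the option lives in the [Groups]
>>>> section of OpenSim.ini.  A minimal sketch (check OpenSim.ini.example
>>>> in git master for the authoritative entry and defaults):
>>>>
>>>>     [Groups]
>>>>     Enabled = true
>>>>     ; Experimental: only send group IMs to members who are online
>>>>     MessageOnlineUsersOnly = true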
>>>>
>>>> It's temporarily a little spammy on the console right now (what isn't!)
>>>> with a debug message that says how many online
>>>> users it is sending to and how long a send takes.
>>>>
>>>> Unlike Michelle's solution, this works by querying the Presence service
>>>> for online users, though it also caches this
>>>> data to avoid hitting the presence service too hard.
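>>>>
>>>> The caching pattern involved is roughly the following C# sketch
>>>> (illustrative only - the class and names here are hypothetical, not
>>>> the actual module code):
>>>>
>>>>     using System;
>>>>     using System.Collections.Generic;
>>>>
>>>>     // Cache the online-member query for a short window so repeated
>>>>     // group IMs don't hit the Presence service on every message.
>>>>     class OnlineMembersCache
>>>>     {
>>>>         private readonly TimeSpan m_ttl = TimeSpan.FromSeconds(30);
>>>>         private List<Guid> m_online = new List<Guid>();
>>>>         private DateTime m_fetchedAt = DateTime.MinValue;
>>>>
>>>>         // queryPresence stands in for the real Presence service call.
>>>>         public List<Guid> Get(Func<List<Guid>> queryPresence)
>>>>         {
>>>>             if (DateTime.UtcNow - m_fetchedAt > m_ttl)
>>>>             {
>>>>                 m_online = queryPresence();
>>>>                 m_fetchedAt = DateTime.UtcNow;
>>>>             }
>>>>             return m_online;
>>>>         }
>>>>     }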
>>>>
>>>> Even though I implemented this, I'm not convinced that it's the best
>>>> way to go - I think Michelle's approach of
>>>> sending login/logoff status directly from simulator to groups service
>>>> could still be better.  My chief concern with
>>>> the groups approach is the potential inconsistency between online
>>>> status stored there and in the Presence service.
>>>> However, this could be a non-issue. Need to give it more thought.
>>>>
>>>> On 14/10/12 22:53, Akira Sonoda wrote:
>>>>
>>>>> IMHO, finding out which group members are online and sending the group
>>>>> IM/notice etc. to them should not be done by
>>>>> the region server from which the group IM/notice etc. is sent.
>>>>> This is a task which should be done centrally - in the case of OSgrid,
>>>>> in Dallas, TX
>>>>> (http://wiki.osgrid.org/index.php/Infrastructure). The region server
>>>>> should only collect the group IM/notice etc. and
>>>>> send it to the central group server, or, in the other direction,
>>>>> receive IM/notices etc. from the central group server and
>>>>> distribute them to the agents active on the region(s).
>>>>>
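>>>>> In rough terms, the flow would be something like the C# sketch below
>>>>> (all types and names are hypothetical, just to illustrate the idea):
>>>>>
>>>>>     using System;
>>>>>     using System.Collections.Generic;
>>>>>
>>>>>     // The originating region forwards each group IM once; the
>>>>>     // central groups server looks up online members and fans the
>>>>>     // message out, instead of the region delivering per member.
>>>>>     class CentralGroupRelay
>>>>>     {
>>>>>         private readonly Func<Guid, IEnumerable<Guid>> m_onlineMembers;
>>>>>         private readonly Action<Guid, string> m_deliverToRegion;
>>>>>
>>>>>         public CentralGroupRelay(
>>>>>             Func<Guid, IEnumerable<Guid>> onlineMembers,
>>>>>             Action<Guid, string> deliverToRegion)
>>>>>         {
>>>>>             m_onlineMembers = onlineMembers;
>>>>>             m_deliverToRegion = deliverToRegion;
>>>>>         }
>>>>>
>>>>>         // Called once per group IM by the originating region server.
>>>>>         public void Relay(Guid groupId, string message)
>>>>>         {
>>>>>             foreach (Guid member in m_onlineMembers(groupId))
>>>>>                 m_deliverToRegion(member, message);
>>>>>         }
>>>>>     }
>>>>>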
>>>>
>>>> That concentrates all distribution on a central point rather than
>>>> spreading it amongst simulators.  Then OSGrid has
>>>> the problem of scaling this up.
>>>>
>>>> Having said that, there are advantages to funnelling things through a
>>>> reliable central point.  Which is better
>>>> is a complicated engineering question - the kind of which there are many
>>>> in the MMO/VW space.
>>>>
>>>>
>>>>> But there are other places as well which can and should be improved. I
>>>>> did some tests with several viewers, counting the web
>>>>> requests to the central infrastructure:
>>>>>
>>>>> Test 1: Teleport from a Plaza to one of my regions located on a server
>>>>> in Europe and afterwards logging out:
>>>>>
>>>>> Cool VL Viewer: 912 requests, mostly SynchronousRestForms POSTs to
>>>>> http://presence.osgrid.org/presence (I guess to inform
>>>>> all my 809 friends [mostly only 5% online] that I am going offline,
>>>>> because the calls to the presence service were made after
>>>>> I closed the viewer)
>>>>> Singularity Viewer: 921 requests, mostly calls to presence after logoff
>>>>> Teapot Viewer: 910 requests, mostly calls to presence after logoff
>>>>> Astra Viewer: 917 requests, mostly calls to presence after logoff
>>>>> Firestorm: 1005 requests, mostly calls to presence after logoff
>>>>> Imprudence: 918 requests, mostly calls to presence after logoff
>>>>>
>>>>> So far so good. I have no idea why my 760 offline friends have to be
>>>>> informed that I went offline ...
>>>>> (Details can be found here:
>>>>> https://docs.google.com/open?id=0B301xueh1kxdNG1wLWo2YVVfYjA )
>>>>>
>>>>> Test 2: Direct login onto my region and then logoff (with
>>>>> FetchInventory2 disabled):
>>>>>
>>>>> Cool VL Viewer: 2232 requests, mostly calls to presence (~800 during
>>>>> login and ~800 during logout) and xinventory
>>>>> Singularity Viewer: 2340 requests, mostly calls to presence and
>>>>> xinventory
>>>>> Teapot Viewer: produced 500+ threads in a very short time and then
>>>>> OpenSim.exe crashed
>>>>> Astra Viewer: 2831 requests, mostly calls to presence and xinventory
>>>>> Firestorm Viewer: ACK timeout for me. OpenSim.exe survived on 500
>>>>> threads for 30+ minutes, producing 4996 requests, mostly
>>>>> xinventory
>>>>> Imprudence: 1745 requests, mostly presence
>>>>>
>>>>> Again, why do all my 809 friends have to be verified with individual
>>>>> requests? Why this difference in xinventory
>>>>> requests? And why do both Teapot and Firestorm produce so many
>>>>> threads in such a short time, bringing OpenSim.exe to a
>>>>> crash, or close to one ...
>>>>> (Details can be found here:
>>>>> https://docs.google.com/open?id=0B301xueh1kxdMDJxWm5UR2QtU2c )
>>>>>
>>>>
>>>> The presence information is useful data, and it was possible in git
>>>> master commit da2b23f to change the Friends module
>>>> to fetch all presence data in one call for status notification when a
>>>> user goes on/offline, rather than making a
>>>> separate call for each friend.
>>>>
>>>> This should be more efficient, since only the latency and resources of
>>>> one call are required.  However, since each
>>>> friend still has to be messaged separately to tell them of the status
>>>> change, I'm not sure how much practical effect
>>>> this will have.
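>>>>
>>>> The shape of that change, as a C# sketch (the interface below is
>>>> illustrative, not the exact IPresenceService signature):
>>>>
>>>>     using System;
>>>>     using System.Collections.Generic;
>>>>
>>>>     interface IPresenceLookup
>>>>     {
>>>>         // Before: called once per friend, so N friends meant N
>>>>         // round trips to the presence service.
>>>>         bool IsOnline(Guid userId);
>>>>
>>>>         // After (da2b23f-style): one request covers the whole
>>>>         // friends list, paying one call's latency and overhead.
>>>>         Dictionary<Guid, bool> GetOnlineStatus(List<Guid> userIds);
>>>>     }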
>>>>
>>>>
>>>>> Test 3: Direct login to my region with FetchInventory2 enabled.
>>>>>
>>>>> Teapot Viewer: I closed the viewer after 30 minutes. The number of
>>>>> threads was still rising, up to 260. In the end I counted
>>>>> 30634 xinventory requests... My inventory has 14190 items !!!
>>>>> Firestorm Viewer: quite normal, approx 2020 requests ... quite a few slow
>>>>> FetchInventoryDescendents2 caps, with 100 sec
>>>>> max
>>>>>
>>>>
>>>> Regarding the inventory service, unfortunately many viewers appear to
>>>> behave very aggressively when fetching inventory
>>>> information.  For instance, I'm told that if you have certain types of
>>>> AO enabled, some viewers will fetch your
>>>> entire inventory.  The LL infrastructure may be able to cope with this,
>>>> but the more modest machines running grids can
>>>> have trouble, it seems.
>>>>
>>>> I'm not sure what the long-term solution is.  I suspect it's possible
>>>> to greatly increase inventory fetch efficiency,
>>>> possibly by some kind of call batching.  Or perhaps there's some
>>>> viewer-side caching that OpenSimulator isn't working
>>>> with properly.
>>>>
>>>>
>>>>> (Details can be found here:
>>>>> https://docs.google.com/open?id=0B301xueh1kxdNEtEeUVFamU1QUE )
>>>>>
>>>>> Just my observations this weekend.
>>>>> Akira
>>>>>
>>>>>
>>>>>
>>>>> 2012/10/13 Justin Clark-Casey <jjustincc at googlemail.com>
>>>>>
>>>>>     Hi Michelle.  I've now had some more time to think about this. In
>>>>> fact, I established a proposal summary page at
>>>>>     [1] which I'll change as we go along (or please feel free to
>>>>> change it yourself).  We do need to fix this problem of
>>>>>     group IM taking massive amounts of time with groups that aren't that big.
>>>>>
>>>>>     I do like the approach of caching online status (and login time)
>>>>> in the groups service.
>>>>>
>>>>>     1.  It's reasonably simple.
>>>>>     2.  One network call to fetch online group members per IM.
>>>>>     3.  May allow messaging across multiple OpenSimulator
>>>>> installations.
>>>>>
>>>>>     However, this approach does mean
>>>>>
>>>>>     1.  Independently updating the groups service on each
>>>>> login/logout.  I'm not saying this is a problem, particularly
>>>>>     if it saves traffic later on.
>>>>>     2.  The groups service has to deal with extra information. Again, this
>>>>> is fairly simple so not necessarily a fatal
>>>>>     issue, though it does mean every groups implementation needs to do
>>>>> this in some manner.
>>>>>     3.  Online cache is not reusable by other services in the future.
>>>>>
>>>>>     On a technical note, the XmlRpc groups module does in theory cache
>>>>> data for 30 seconds by default, so a change in
>>>>>     online status may not be seen for up to 30 seconds.  I personally
>>>>> think that this is a reasonable tradeoff.
>>>>>
>>>>>     Of the above cons, 3 is the one I find most serious.
>>>>>  If other services would also benefit from online
>>>>>     status caching in the future, they would have to implement their
>>>>> own caches (and be updated from the simulators).
>>>>>
>>>>>     I do agree that making a GridUser.LoggedIn() call for every single
>>>>> group member on every single IM is unworkable.
>>>>>       Even if this is only done once and cached for a certain period
>>>>> of time it could be a major issue for large groups.
>>>>>
>>>>>     So an alternative approach could be to add a new call to the GridUser
>>>>> service (maybe LoggedIn(List<UUID>)) that will only
>>>>>     return GridUserInfo for those that are logged in.  This could then be
>>>>> cached simulator-side for a certain period of time
>>>>>     (e.g. 30 seconds, like the groups information) and used for group
>>>>> IM.
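>>>>>
>>>>>     As a C# sketch, the proposed addition might look like this (the
>>>>>     method name comes from this proposal; the types around it are
>>>>>     hypothetical):
>>>>>
>>>>>         using System;
>>>>>         using System.Collections.Generic;
>>>>>
>>>>>         class GridUserInfo
>>>>>         {
>>>>>             public Guid UserID;
>>>>>             public DateTime Login;   // last login time
>>>>>         }
>>>>>
>>>>>         interface IGridUserService
>>>>>         {
>>>>>             // Proposed: one batched query per group IM, returning
>>>>>             // info only for members currently logged in.  The
>>>>>             // simulator caches the result for ~30 seconds.
>>>>>             List<GridUserInfo> LoggedIn(List<Guid> userIds);
>>>>>         }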
>>>>>
>>>>>     This has the advantages that
>>>>>
>>>>>     1.  Groups and future services don't need to do their own login
>>>>> caching.
>>>>>     2.  Future services can use the same information and code rather
>>>>> than have to cache login information themselves.
>>>>>
>>>>>     However, it does
>>>>>
>>>>>     1.  Require GridUserInfo caching simulator-side; I would judge
>>>>> this to be a more complex approach.
>>>>>     2.  Mean that during the cache period, newly online group members
>>>>> will not receive messages.  (This is going to
>>>>>     happen with GetGroupMembers() caching anyway.)
>>>>>     3.  Traffic is still generated to the GridUser service at the end
>>>>> of every simulator-side caching period.  This is
>>>>>     probably not a huge burden.
>>>>>
>>>>>     So right now, I'm somewhat more in favour of a GridUserInfo
>>>>> simulator-side caching approach than caching login
>>>>>     information within the groups service.  However, unlike you, I
>>>>> haven't actually tried to implement this approach so
>>>>>     there may well be issues that I haven't seen.
>>>>>
>>>>>     What do you think, Michelle (or anybody else)?
>>>>>
>>>>>
>>>>>     On 10/10/12 19:47, Michelle Argus wrote:
>>>>>
>>>>>         http://code.google.com/p/flotsam/
>>>>> is the current flotsam version and
>>>>>         points to the GitHub repo which I forked and
>>>>>         then patched.
>>>>>
>>>>>         None of the changes I proposed in my git fork have been
>>>>> implemented, either in opensim or in flotsam.
>>>>>
>>>>>            Consider my proposal a quick fix for the time being
>>>>> which does not solve all the other issues mentioned in
>>>>> later
>>>>>         mailings.
>>>>>
>>>>>         On 09.10.2012 10:24, Ai Austin wrote:
>>>>>
>>>>>             Michelle Argus on Wed Oct 3 18:00:23 CEST 2012:
>>>>>
>>>>>                 I have added some changes to the group module of
>>>>> OpenSim and the flotsam server.
>>>>>                 ...
>>>>>                 The changes can be found in the two git repositories here:
>>>>> https://github.com/MAReantals
>>>>>
>>>>>                 NB: Both the changes to flotsam and to opensim are backward
>>>>> compatible and do
>>>>>                 not require that both parts are updated. If some
>>>>> simulators are not
>>>>>                 updated, it can happen that some group members do not
>>>>> receive
>>>>>                 group messages, as their online status is not updated
>>>>> correctly. In a grid
>>>>>                 like OSgrid, my recommendation would thus be to first
>>>>> update the
>>>>>                 simulators and, at a later stage, flotsam.
>>>>>
>>>>>
>>>>>             Hi Michelle... I am looking at what is needed to update
>>>>> the Openvue grid, which is using the flotsam
>>>>> XmlRpcGroups
>>>>>             module.  The GitHub repository has the changes from a few
>>>>> days ago... but I wonder if there has been an
>>>>>             update/commit
>>>>>             into the main OpenSim GitHub area already.  I cannot see a
>>>>>  related commit looking back over the last week
>>>>>             or so.  Is
>>>>>             the core system updated so this module is up to date in
>>>>> that?  I also note that the OpenSim.ini.example file
>>>>>             contains
>>>>>             a reference to http://code.google.com/p/flotsam/
>>>>> for details of how to
>>>>>             install the service... but that seems to be
>>>>>             pointing at an out-of-date version?
>>>>>
>>>>>             I think for the flotsam PHP end it is straightforward, and
>>>>> I obtained the changed groups.sql and
>>>>> xmlrpc.php files
>>>>>             needed.  But note that people are still pointed via the
>>>>> OpenSim.ini.example comments at the old version on
>>>>>             http://code.google.com/p/flotsam/
>>>>> so that either needs updating to the
>>>>>             latest version, or the comment in
>>>>>             OpenSim.ini.example needs to be changed.
>>>>>
>>>>>             To avoid mistakes, I wonder if you can clarify where to go
>>>>> for the parts needed and at what revision/date of
>>>>>             OpenSim
>>>>>             0.7.5 dev master this was introduced, plus what to get and what
>>>>> to change for an existing service in terms of the
>>>>>             database
>>>>>             tables, the OpenSim.exe instance and the web-support PHP code
>>>>> areas?
>>>>>
>>>>>             Thanks Michelle, Ai
>>>>>
>>>>>
>>>>>
>>>>>     --
>>>>>     Justin Clark-Casey (justincc)
>>>>>     OSVW Consulting
>>>>>     http://justincc.org
>>>>>     http://twitter.com/justincc
>>>>
>>>>
>>>
>>
>>



-- 
Michael Emory Cerquoni