Shared Services Configuration

=Introduction=

In some scenarios, one may want two or more separate installations of OpenSimulator rather than one large grid. However, this creates a major inconvenience if users need access to both grids - a user account needs to be created on each, and the accounts will have entirely different inventories (and hence clothing, body parts and attachments) between OpenSimulator installations.

One approach is to have a single user account on one of the grids and allow the user to travel to other grids via the Hypergrid.

Another approach, which we will detail here, is to share the user account, authentication, avatar, asset and inventory services between multiple installations but not other services, such as grid or griduser. This allows the two grids to remain separate (one will not be able to teleport between regions on different grids) whilst allowing the user to retain the same account details, inventory and attachments between multiple OpenSimulator installations.

'''This approach should be considered experimental. It is not officially supported by the OpenSimulator project at this time. I have seen it used successfully in a few situations but there may well be corner cases that cause issues. -- Justincc (talk) 22:35, 30 April 2014 (UTC)'''

=Steps=

These are steps for sharing services between two separate OpenSimulator installations (Grid A and Grid B).

We make the following assumptions:

* Both installations are running in grid mode.
* Each installation is hosted on a separate machine or set of machines. If both installations are on the same machine then you will need to adjust default port numbers on non-shared services so that they will not clash.
* Hypergrid is inactive.
* Grid A and Grid B are running on the same local network. The ROBUST service addresses are as follows.

* Grid A - 192.168.1.2 (8002 public port, 8003 private port)
* Grid B - 192.168.1.3 (8002 public port, 8003 private port)

==Step 1: Decide which grid installation will host the shared services==

One grid's ROBUST instance (or more than one instance in sophisticated setups) will host the services to be shared between the installations. The other grid's simulator and ROBUST configuration will be changed so that it uses the shared services rather than its own. In these instructions, Grid A will host the shared services.

==Step 2: Configure the simulators on Grid B to use Grid A's services==

On Grid B, we need to configure the simulators so that they connect to Grid B for some service types but to Grid A for others.

The services that need to be shared are:

* Asset - to share assets used within inventory items.
* Authentication - so that login passwords are authenticated against the same stored hashes.
* Avatar - to share information about worn attachments, clothing and body parts.
* Inventory - to share inventory items.
* UserAccount - to share user account information.

To do this, we configure the ServerURI entry in each relevant service section to point at the Grid A services rather than Grid B's. In this example, the configuration would be as follows.
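As a sketch, assuming Grid B's simulators use the key names from the stock config-include/GridCommon.ini example file (verify these against your OpenSimulator version):

<source lang="ini">
; In Grid B's bin/config-include/GridCommon.ini, point each shared
; service at Grid A's private ROBUST port.

[AssetService]
    AssetServerURI = "http://192.168.1.2:8003"

[InventoryService]
    InventoryServerURI = "http://192.168.1.2:8003"

[AvatarService]
    AvatarServerURI = "http://192.168.1.2:8003"

[AuthenticationService]
    AuthenticationServerURI = "http://192.168.1.2:8003"

[UserAccountService]
    UserAccountServerURI = "http://192.168.1.2:8003"
</source>

The non-shared services (grid, griduser, presence, etc.) keep their existing Grid B URIs.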

==Step 3: Configure Grid B's LoginService to use Grid A services where appropriate==
As well as redirecting Grid B's simulators to use Grid A's services, we need to configure Grid B's login service to use Grid A's services.

We need to do this because the login service consults these other services either to identify and authenticate the user (Authentication and UserAccount services) or to get information to populate the login response (Avatar, Inventory).

The configuration here is more complex because we need to tell the Grid B login service to use connectors to access Grid A's services rather than instantiate its own local service instances.

==Step 3A: Configure the Login Service to use connectors for shared services==
This is done in the [LoginService] section of Robust.ini. Instead of specifying services in local service DLLs, we ask the login service to instantiate connectors.

For instance, the usual UserAccountService configuration here is
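(Sketch taken from the stock Robust.ini example file; your copy may differ slightly.)

<source lang="ini">
[LoginService]
    ; Local, in-process service instantiated from a service DLL
    UserAccountService = "OpenSim.Services.UserAccountService.dll:UserAccountService"
</source>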

which instantiates a local user account service from OpenSim.Services.UserAccountService.dll. But for a connector configuration, this becomes
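A sketch of the connector form (the connector class name below is assumed from the stock OpenSim.Services.Connectors.dll; verify it against your version):

<source lang="ini">
[LoginService]
    ; Remote connector that talks to the shared ROBUST instance on Grid A
    UserAccountService = "OpenSim.Services.Connectors.dll:UserAccountServicesConnector"
</source>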

In all, the service sections of [LoginService] change from

to
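As a sketch assuming the stock Robust.ini defaults (only the shared services are shown; other entries such as GridService, PresenceService and LocalServiceModule stay untouched), that is from this:

<source lang="ini">
; Before: local service instances (stock Robust.ini)
[LoginService]
    UserAccountService = "OpenSim.Services.UserAccountService.dll:UserAccountService"
    AuthenticationService = "OpenSim.Services.AuthenticationService.dll:PasswordAuthenticationService"
    InventoryService = "OpenSim.Services.InventoryService.dll:XInventoryService"
    AvatarService = "OpenSim.Services.AvatarService.dll:AvatarService"
</source>

to this:

<source lang="ini">
; After: connectors pointing at Grid A (connector class names assumed
; from the stock OpenSim.Services.Connectors.dll; verify for your version)
[LoginService]
    UserAccountService = "OpenSim.Services.Connectors.dll:UserAccountServicesConnector"
    AuthenticationService = "OpenSim.Services.Connectors.dll:AuthenticationServicesConnector"
    InventoryService = "OpenSim.Services.Connectors.dll:XInventoryServicesConnector"
    AvatarService = "OpenSim.Services.Connectors.dll:AvatarServicesConnector"
</source>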

In this case, we don't need to configure the asset service since the login service doesn't need to access this.

We also can't reconfigure the library service, since it doesn't currently have a connector. So grids connected in this manner must have identical library configurations.

==Step 3B: Configure the connectors with Grid A's ROBUST service URIs==

As well as getting the login service to use connectors to Grid A's services, we need to tell those connectors which URIs to use. The configuration here is actually the same as for the simulator. So for each of the connectors we configured above, we need to add the relevant ServerURI entry.

So for the user account service, for instance, we need to add
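For instance (a sketch, using the same key name as in the simulator configuration):

<source lang="ini">
[UserAccountService]
    UserAccountServerURI = "http://192.168.1.2:8003"
</source>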

So in total, the following entries need to be added
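A sketch of the full set of entries in Grid B's Robust.ini (these sections may already exist, in which case add the URI lines to them):

<source lang="ini">
[UserAccountService]
    UserAccountServerURI = "http://192.168.1.2:8003"

[AuthenticationService]
    AuthenticationServerURI = "http://192.168.1.2:8003"

[AvatarService]
    AvatarServerURI = "http://192.168.1.2:8003"

[InventoryService]
    InventoryServerURI = "http://192.168.1.2:8003"
</source>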

Again, this is the same as for the simulator configuration above, except that we don't need to configure the asset service.

It's also a good idea to comment out all the other entries in those sections (e.g. LocalServiceModule). They very probably won't do any harm by being there, but they are not used by the connectors.

==Step 3C: Disable unused connectors on Grid B (optional)==
This is technically an optional step, but I think that it's a very good idea to disable the Grid B services that are no longer required, in order to save confusion if anything is misconfigured.

To do this, one comments out the connectors made available in the [ServiceList] section of Grid B's Robust.ini file. In this case, these are the service connectors made available to other parties. So the following connectors should be commented out.
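Assuming the stock [ServiceList] entries (connector names and ports may differ in your version), the lines to comment out look roughly like this:

<source lang="ini">
[ServiceList]
    ; These services are now hosted on Grid A, so disable Grid B's copies
    ;AssetServiceConnector = "8003/OpenSim.Server.Handlers.dll:AssetServiceConnector"
    ;InventoryInConnector = "8003/OpenSim.Server.Handlers.dll:XInventoryInConnector"
    ;AuthenticationServiceConnector = "8003/OpenSim.Server.Handlers.dll:AuthenticationServiceConnector"
    ;AvatarServiceConnector = "8003/OpenSim.Server.Handlers.dll:AvatarServiceConnector"
    ;UserAccountServiceConnector = "8003/OpenSim.Server.Handlers.dll:UserAccountServiceConnector"
</source>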

==Conclusion==

You should now be able to use the same avatar account to log onto Grid A (via http://192.168.1.2:8002 in this example) and onto Grid B (via http://192.168.1.3:8002). Users will have the same details, inventory and worn items/attachments on both grids. However, these grids are still conceptually separate - regions can overlap (e.g. Grid A and Grid B can have different regions at 1000,1000) and there is no way to move directly between them without Hypergrid being active.

=Discussion=

==Not supported==
To emphasise what was said in the introduction, this is not a configuration of OpenSimulator officially supported by the project. As it's not commonly used, you may run into bugs or other issues associated with this unusual setup. Changes over time may require configuration adjustment that is not documented elsewhere. It also means that this configuration is not well tested.

Having said that, there's no reason to think that this will stop working. It relies on fundamental configuration requirements for OpenSimulator (distributed services and connectors) which are necessary for other functionality.

==Hypergrid==

It's unknown whether Hypergrid will work with this configuration. It may well be possible when each grid has its own services, though extra configuration will definitely be required (e.g. setting up connectors).

==Access control==

Once somebody has a user account, they can move indiscriminately between the installations sharing those services. It might be possible to limit this with a non-shared authentication service (or even an alternative authentication service implementation) that can control whether a user is allowed access to only Grid A or Grid B.

Without this, if you need access control you will need to implement it either via the normal configuration options (e.g. region access lists) that you would use on a unified grid, or via Hypergrid instead, which does have developed and developing access controls. It might also be possible to enforce some access control via network firewall configuration.

==Friends Service==
If the friends service is shared, then users will see friends who are at other grids. This may be confusing, since they won't be able to communicate with them as IMs are routed only to simulators in the same grid.

On the other hand, if the friends service is not shared then users will have to manage a separate friends list on each server.

==Conceptually simpler than the Hypergrid==

The chief advantage is over the other mechanism (the Hypergrid) for allowing users to keep the same account details and inventory on different grids. Unlike the Hypergrid, there are no non-Second Life concepts to understand, such as inventory suitcases or Hypergrid links.

However, this approach doesn't allow a user to move between grids unless they log out from one grid and log in to another, even though those grids use the same account and authentication details. This makes it quite difficult to imagine scenarios where the naive service sharing described above is useful.

==Scalability==
The scalability issue is complex here.

On the one hand, if one has to have multiple grids with shared user accounts/inventory rather than a single big grid, sharing services will still make efficiencies of scale possible. For instance, if one is using a deduplicating asset service, then deduplication will occur across the multiple grids sharing the asset service, instead of two copies of the same asset potentially being stored in two different grids.

One could achieve this simply by sharing the asset service as above but none of the others.

However, this same concentration of requests into a shared service may be a scalability problem, requiring the techniques in Performance to scale the shared services. This would not be an issue if the grids were completely separate or if the grid connectivity was achieved via Hypergrid, where both grids retain their own service instances.