|View Issue Details|
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0007814||opensim||[REGION] Script Functions||public||2016-01-15 22:58||2016-01-16 09:34|
|Platform||Operating System||Operating System Version|
|Product Version||master (dev code)|
|Target Version||Fixed in Version|
|Summary||0007814: Setting a prim's color via script is sometimes slightly off|
|Description||When setting a prim's color via llSetColor or llSetPrimitiveParams (or their *Link* variants), the resulting color is sometimes very slightly off, both compared to the vector passed to the function and compared to the result SL gives for the same input.|
|Steps To Reproduce||Script:|
llSetColor(<0.243137, 0.031373, 0.443137>, ALL_SIDES);
llSay(0, "\n \nllGetColor(): " + (string)llGetColor(ALL_SIDES) + "\nllGetLinkPrimitiveParams(): " + llList2String(llGetLinkPrimitiveParams(LINK_THIS, [PRIM_COLOR, ALL_SIDES]), 0));
1. Run this script in a new prim on both SL and OpenSim (I used latest master for this test) and compare the results.
My testing resulted in the following:
SL Results:
Results in an RGB value in the color picker of 62, 8, 113
llGetColor(): <0.24314, 0.03137, 0.44314>
llGetLinkPrimitiveParams(): <0.243137, 0.031373, 0.443137>
OS 0.9.0 Dev Results:
Results in an RGB value in the color picker of 61, 8, 112
llGetColor(): <0.239216, 0.031373, 0.439216>
llGetLinkPrimitiveParams(): <0.239216, 0.031373, 0.439216>
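The two sets of numbers above are consistent with each channel being stored as a byte and read back as byte/255, shown to six decimal places. A quick sketch to check that (the byte/255 storage model is an assumption, and `roundtrip` is my own helper name, not an OpenSim function):

```python
def roundtrip(stored_byte):
    # Reconstruct the float channel a script reads back from a stored byte,
    # displayed to six decimal places as in the llGetColor output above.
    return round(stored_byte / 255.0, 6)

print(roundtrip(62))   # SL stores round(0.243137 * 255) = 62  -> 0.243137
print(roundtrip(61))   # OpenSim truncates to 61               -> 0.239216
print(roundtrip(112))  # likewise for the z channel             -> 0.439216
```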
|Additional Information||I've also tested on a release version of OpenSim with the same results.|
|Tags||No tags attached.|
|Git Revision or version number||Master|
|Run Mode||Standalone (1 Region) , Standalone (Multiple Regions)|
|Physics Engine||ODE, BulletSim|
|Environment||.NET / Windows64|
The issue is caused by the difference between the float-to-int conversion SL uses and the one OpenSim uses. I have known about it for a while because it affected the development of the CCS combat system, but I never considered it significant.
However, it does exist. Any math wizards may now step forward ;)
Admittedly it's a very minor issue; it came up during development of a color HUD I was making for someone, and I had been scratching my head wondering why the results were slightly different on OS vs. SL. The HUD reads presets from a notecard in hex, RGB, or SL format and converts them to the SL color format accordingly.
At first I thought there might be an error in the conversion code I wrote and chalked it up to that, but then thought it peculiar that SL got the color right on the dot whereas OS would not. So I set out to do some much simpler tests. My initial tests revealed that setting the color by hand and then using one of the Get functions to output it matched up fine, which led me to realize that the "slight offness" occurred only when setting the color through script functions.
I might take a look at the OS source code and see if I can find anything but I'm definitely no math wizard :)
Mata Hari (reporter)
It looks very much as though SL uses round(value*255) whereas OpenSim uses (integer)(value*255), which is equivalent to floor(value*255) for non-negative values.
Example: the x component in the above script is 0.243137, which, when multiplied by 255, gives 61.999935. SL rounds this to 62; OpenSim drops the decimal part.
I don't know where in the code to find it, but that's my guess as to why it's happening. The SL method is correct.
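This hypothesis is easy to check numerically; Python's int() truncates toward zero for positive values, just like an LSL (integer) cast, so it can stand in for the two conversions (a sketch of the arithmetic only, not OpenSim code):

```python
x = 0.243137      # the x component from the repro script
scaled = x * 255  # roughly 61.999935

print(int(scaled))    # truncation, apparently what OpenSim does -> 61
print(round(scaled))  # rounding, apparently what SL does        -> 62
```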
@Mata - That was a good suggestion. I tried playing around with Math.Round() in the SetColor() method (located in LSL_Api.cs) in the OS source code:
texcolor.R = Util.Clip((float)Math.Round(color.x), 0.0f, 1.0f);
in order to round the number before casting it to float, but didn't have a whole lot of luck; the values still came out slightly off.
A quick look at Util.Clip() tells me that it returns Math.Min(Math.Max(x, min), max);
So if I read that right, I'm getting whatever is the result of Math.Min(Math.Max((float)Math.Round(color.x), 0.0f), 1.0f)?
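One possible reason the experiment above didn't change anything: Math.Round(color.x) rounds the 0-1 channel value itself (snapping 0.243137 toward 0.0), while the truncation Mata Hari describes would happen later, when the 0-1 value is scaled to a 0-255 byte. If that model is right, the rounding has to be applied at the 0-255 scale. A Python sketch of the idea (my assumption about where the quantization occurs, not a quote of the OpenSim source):

```python
def quantize_truncate(x):
    # Suspected current behaviour: scale to 0-255, then truncate.
    return int(x * 255)

def quantize_round(x):
    # Proposed behaviour: round at the 0-255 scale, where the error arises.
    return int(round(x * 255))

x = 0.243137
print(round(x))              # 0  -- rounding the 0-1 value is too early
print(quantize_truncate(x))  # 61 -- the off-by-one the report describes
print(quantize_round(x))     # 62 -- matches SL
```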
Mata Hari (reporter)
I'm not a coder, so I can't help you with how it needs to be handled in C#. What happens in LSL (and presumably natively in C# too) is that when you cast a float value to an integer it simply drops all decimals and returns the integer part of the value.
i.e. if you did any of the following...
(integer)1.01
(integer)1.5
(integer)1.99999999999
...the value being returned will always be 1.
When converting a float range of 0.0 to 1.0 to an integer range of 0-255 you would use the LSL code:
llRound(value*255)
What appears to be happening, based on the values you're reporting, is the LSL equivalent of doing:
(integer)(value*255)
That second method will always return a value that is less representative of the "actual" colour whenever value*255 has a decimal component >= 0.5.
I don't know what the distinctions/methods are in C# but that would appear to be the issue.
Seth Nygard (reporter)
Although not perfect, one simple way to achieve reasonable 4/5 rounding is to add 0.5 to any float value that is being cast or converted to an integer. The truncated result will then be what 4/5 rounding rules generally give: any fractional part below 0.5 leaves the whole-number part unchanged, while anything at or above 0.5 bumps it up by 1.
If we take the above example and add 0.5 to each number then the result is closer to what we expect to see ...
(integer)(1.01+0.5) will be 1
(integer)(1.5+0.5) will be 2
(integer)(1.99999999999+0.5) will be 2
(integer)(1.45+0.5) will be 1
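Seth's add-0.5 trick can be verified the same way; int() in Python truncates toward zero like an LSL (integer) cast, so for non-negative values adding 0.5 before truncating yields 4/5 rounding (a sketch, with a helper name of my own):

```python
def round_by_half(x):
    # 4/5 rounding via truncation: add 0.5, then drop the decimals.
    # Only valid for non-negative x, which is fine for colour channels.
    return int(x + 0.5)

print(round_by_half(1.01))            # -> 1
print(round_by_half(1.5))             # -> 2
print(round_by_half(1.99999999999))   # -> 2
print(round_by_half(1.45))            # -> 1
print(round_by_half(0.243137 * 255))  # -> 62, the value SL stores
```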
|2016-01-15 22:58||mewtwo0641||New Issue|
|2016-01-15 23:03||melanie||Note Added: 0029971|
|2016-01-15 23:23||mewtwo0641||Note Added: 0029973|
|2016-01-16 04:22||Mata Hari||Note Added: 0029976|
|2016-01-16 05:32||mewtwo0641||Note Added: 0029977|
|2016-01-16 07:43||Mata Hari||Note Added: 0029978|
|2016-01-16 09:34||Seth Nygard||Note Added: 0029981|