baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

This might be general to the point of meaningless, but say you have three shader algorithms - one that just applies colour, one that blends a texture and a colour, and one that blends two textures and a colour. Is it generally better to have a single program that takes two textures and a colour, and somehow works out what needs to be used (say with non-zero blend parameters), or is it better to have three specialised programs and switch between them as necessary? How about if there's only one texture at most?

The talk of cost in switching the state machine around is making me wonder if it's better to put some logic in the shaders, especially when it's fairly basic. This is only a few quads (<50) in ES but I'm doing some generalisation, and if it's worth changing this while I'm at it then I might as well!


Colonel J
Jan 3, 2008
I'm just starting out my CG career but I gotta say: man the inconsistency in coordinate systems across platforms is annoying. Right now I'm working with a few different programs that have varying conventions as to whether Z or Y is up, and which way "forward" is. Is there a list of equations to go from and to all the possible orientations? I know I'll have to work them out by hand but :effort:

Jewel
May 2, 2009

brian posted:

So I was wondering why I have to add the +0.01 to the x of the tex2D call; without the +0.01 it misses one of the colours and everything is indexed wrong. I suspect that when I make it support multiple lines of palettes per file, for power-of-2 textures and whatnot, I'll have to add the same to the y. Any help would be fab.

Really not quite sure since I haven't given it a good look yet, but remember that things are 0 indexed so you might have to use (length - 1) or (width - 1) somewhere. It sounds like that could be the issue based on past experiences.

Edit: Oh! Also, sampler lookups are on texture edges, not their centers. You have to add half of the width of a pixel to the coord to get the center of the pixel. Something like: curPt -= mod(curPt, (0.5 / vec2(imgW, imgH)));
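If it helps to reason about the half-texel thing on the CPU, here's a little JavaScript sketch of the texel-center idea (the helper name is made up, and this is one common formulation rather than the exact snippet above):

```javascript
// Snap a 0-1 texture coordinate to the center of the texel it falls in,
// for a texture of the given size in pixels. Sampling at texel centers
// avoids the off-by-one colour lookups described above.
function texelCenter(u, texSize) {
  return (Math.floor(u * texSize) + 0.5) / texSize;
}

// For an 8-texel-wide palette, every coordinate inside texel 0 snaps to 0.0625:
texelCenter(0.0, 8);   // 0.0625
texelCenter(0.124, 8); // 0.0625
```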

Jewel fucked around with this message at 13:08 on Nov 6, 2014

brian
Sep 11, 2001
I obtained this title through beard tax.

Jewel posted:

Really not quite sure since I haven't given it a good look yet, but remember that things are 0 indexed so you might have to use (length - 1) or (width - 1) somewhere. It sounds like that could be the issue based on past experiences.

Edit: Oh! Also, sampler lookups are on texture edges, not their centers. You have to add half of the width of a pixel to the coord to get the center of the pixel. Something like: curPt -= mod(curPt, (0.5 / vec2(imgW, imgH)));

Ah, that explains a lot. Does that mean that if I'm having to subtract half a pixel from the y, my y indexing is off? It's all so very hard to debug with these things, thanks a bunch though!

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

Colonel J posted:

I'm just starting out my CG career but I gotta say: man the inconsistency in coordinate systems across platforms is annoying. Right now I'm working with a few different programs that have varying conventions as to whether Z or Y is up, and which way "forward" is. Is there a list of equations to go from and to all the possible orientations? I know I'll have to work them out by hand but :effort:

I, for one, am not really sure what you're asking? Are you working in different modelling applications or different graphics APIs? This distinction is fairly important. If it's APIs, I'm pretty sure OpenGL consistently has (0,1,0) as its camera up-vector and (0,0,-1) as its camera direction vector, regardless of OS or computer architecture. I'm not sure with Direct3D, but if you're working in both, you'd probably have to write two vastly different frameworks anyway. If you're trying to draw a model you've imported with your own code, it should be a simple issue of defining a (series of) rotation matrix/matrices that orients it correctly and just applying that to every model you get from that particular modelling application.

If you're talking about different modelling applications, can't you just rotate whatever you're importing until it has the orientation you want? I haven't really worked a lot in modelling, but I don't imagine that'd be very hard.
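For the simplest case that fix-up matrix collapses to a swizzle. A hedged sketch in JavaScript (conventions assumed for illustration: right-handed Z-up in, right-handed Y-up out):

```javascript
// Convert a point from a Z-up, right-handed convention (common in modelling
// tools) to a Y-up, right-handed convention (OpenGL-style). This is a
// -90 degree rotation about X written out component-wise, so handedness is
// preserved: old "up" (0,0,1) becomes new "up" (0,1,0), and old (0,1,0)
// becomes (0,0,-1), i.e. the usual OpenGL forward.
function zUpToYUp([x, y, z]) {
  return [x, z, -y];
}

zUpToYUp([0, 0, 1]); // [0, 1, 0]
```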

Jewel
May 2, 2009

brian posted:

Ah, that explains a lot. Does that mean that if I'm having to subtract half a pixel from the y, my y indexing is off? It's all so very hard to debug with these things, thanks a bunch though!

I'd say take your math, put it into Python or something, and run through it on the CPU, checking against manually calculated values to see whether you're getting where you should or not (when you get the 0-1 value, multiply it by the image width/height and check where that falls on the real image).

Edit: vvv Oh jeez I forgot they didn't as well. Thanks for the reminder!

Jewel fucked around with this message at 02:21 on Nov 7, 2014

brian
Sep 11, 2001
I obtained this title through beard tax.

Jewel posted:

I'd say take your math, put it into Python or something, and run through it on the CPU, checking against manually calculated values to see whether you're getting where you should or not (when you get the 0-1 value, multiply it by the image width/height and check where that falls on the real image).

Haha, I finally worked it out: I was under the assumption that texture coordinates started in the top left for some reason. It worked fine with a texture with only two lines because it wrapped around, laffo. It all makes sense now, thanks!

fritz
Jul 26, 2003

This is probably the wrong thread, but Creative Convention didn't look promising: if I have a rigged skeleton in a .mb or .ma file, how do I get it out without having a maya license? I can get skeletons out of .fbx with the sdk, but maya looks tougher.

Jewel
May 2, 2009

fritz posted:

This is probably the wrong thread, but Creative Convention didn't look promising: if I have a rigged skeleton in a .mb or .ma file, how do I get it out without having a maya license? I can get skeletons out of .fbx with the sdk, but maya looks tougher.

It seems Maya's format is stuck inside Maya, and unless someone's written a conversion tool from .ma to anything, really, you're out of luck :(

Worst case, you can probably just use a student copy of Maya; it's kind of big, but it's very easy to get.

If you can get the file into a .3DS you can maybe use this http://usa.autodesk.com/adsk/servlet/pc/item?siteID=123112&id=22694909

fritz
Jul 26, 2003

Jewel posted:

It seems Maya's format is stuck inside Maya, and unless someone's written a conversion tool from .ma to anything, really, you're out of luck :(

Worst case, you can probably just use a student copy of Maya; it's kind of big, but it's very easy to get.

If you can get the file into a .3DS you can maybe use this http://usa.autodesk.com/adsk/servlet/pc/item?siteID=123112&id=22694909

Yeah :(

Going to try the free trial version tomorrow, I don't need anything fancy just the file export, and if I can do all the exports at once I can be done with it.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
.ma files are ASCII and have a spec (http://download.autodesk.com/us/may...umber=d0e677725).

I think it has the same flavor of problems that FBX/COLLADA can have though: Basically everything you want is buried under multiple layers of functionality, it takes a good amount of work to dig the data you want out, and you have to convert anything that's in a format that you can't handle (which can potentially be very difficult).

I don't know if .ma is any worse in that respect than FBX/COLLADA, but at least with FBX/COLLADA there are third-party converters already.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
It's a dump of Maya-specific features. There's a lot of random bitfields and flags in it with no explanation of the values, and the spec doesn't help.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
What's the best way to include shaders in a WebGL program? All the tutorials I've seen so far just put the shaders directly into the HTML, but surely there has to be a better way?

On that note, are there some particularly good webGL tutorials?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I do:

JavaScript code:
function M(L) {
    return L.join('\n');
}

var fragShader = M([
'void main() {',
'    butts = 1;',
'}',
]);
Whether that's better or worse than shipping it with HTML is up to you.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Could put it in

code:
<script type="my/shader">
shade();
some(more);
</script>
then fetch that from the DOM and use the contents.
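Pulling the source back out of the DOM is a one-liner; a sketch, with the element id and helper name invented here (in a browser you'd pass the real `document`):

```javascript
// Read shader source out of a <script type="my/shader"> element.
// Taking the document as a parameter keeps it testable outside a browser.
function getShaderSource(doc, id) {
  var el = doc.getElementById(id);
  if (!el) throw new Error('no shader element with id: ' + id);
  return el.textContent.trim();
}

// In the browser: var fragSrc = getShaderSource(document, 'frag-shader');
```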

Raenir Salazar
Nov 5, 2010

College Slice
Is this a good thread for Blender related questions, particularly pertaining to cloth simulation and 3d animation/rigging?

I got a mesh I got off the internet from Blendswap, it lacks an underlying "person" mesh, it has a face and arms and clothes but nothing underneath. Would cloth simulation on the clothes still work even without a character mesh to pin them to? There's arms and legs, maybe I should pin them to that? I just want to be able to do a simple walk cycle without the clothes looking stupid.

Alternative solution: Is there any way to rig clothes so that the weighted vertexes squish together the lower their weight is?

To better explain, suppose I have a character with a dress and I parent the dress to the leg armature with automatic weights, the more the dress is actually influence by the leg the more accurate I feel it is; but at the extremities(at the waist for example) they have this weird effect where the 'rim' of the clothing is also moved or deformed when I'd prefer the rim of the cloth to be unmoved and the vertices near it to be constrained by those fixed vertices.

Any ideas? For the most part I feel cloth simulation may be overkill and also makes me cry as one of those things that just looks really difficult.

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

If nobody can help you you might have more luck in the 3DCG thread, which is more about using the software and all that

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

Suspicious Dish posted:

I do:

JavaScript code:
function M(L) {
    return L.join('\n');
}

var fragShader = M([
'void main() {',
'    butts = 1;',
'}',
]);
Whether that's better or worse than shipping it with HTML is up to you.

I can't imagine writing a shader that way. I guess I could write it in a separate file and then transition to that for deployment, but in that case I might as well dump it in the HTML.


Subjunctive posted:

Could put it in

code:
<script type="my/shader">
shade();
some(more);
</script>
then fetch that from the DOM and use the contents.

I'm not sure I understand. Isn't this what I was talking about trying to avoid, or am I missing something here?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

HappyHippo posted:

I'm not sure I understand. Isn't this what I was talking about trying to avoid, or am I missing something here?

Sorry, I thought you meant with escaping or string literals. What don't you like about it? I'm not sure what you're looking for in a solution. You could put it in separate files and XHR to them if you wanted, too.

Tres Burritos
Sep 3, 2009

Suspicious Dish posted:

I do:

JavaScript code:
function M(L) {
    return L.join('\n');
}

var fragShader = M([
'void main() {',
'    butts = 1;',
'}',
]);
Whether that's better or worse than shipping it with HTML is up to you.

That's exactly what Three.js does as well.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

Subjunctive posted:

Sorry, I thought you meant with escaping or string literals. What don't you like about it? I'm not sure what you're looking for in a solution. You could put it in separate files and XHR to them if you wanted, too.

Ideally I would like to keep them in their own files. XHR seems the way to go.

pseudorandom name
May 6, 2007

Before you go reinventing the wheel, glTF seems to be the asset format for WebGL.

Raenir Salazar
Nov 5, 2010

College Slice

baka kaba posted:

If nobody can help you you might have more luck in the 3DCG thread, which is more about using the software and all that

Thanks! I'll check it out.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

pseudorandom name posted:

Before you go reinventing the wheel, glTF seems to be the asset format for WebGL.

http://www.gltf.org/

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
So I just posted this thing in the screenshot thread where I used WebGL/shaders to plot the Julia set (move the mouse around to see the psychedelic colors).

At first I stored the real and imaginary components of z into the red and green channels in the texture, first by scaling to the range (0.0, 1.0), but that didn't offer enough precision. In an attempt to get more precision I tried using the remaining two channels to store an additional 8 bits like so:

code:
vec2 scaled = clamp(vec2(real, imag) / u_scale + .5, 0.0, 1.0);

gl_FragColor = vec4(floor(scaled * 256.0) / 256.0, fract(scaled * 256.0));
This mostly worked but I can see a noticeable seam along the real and imaginary axes (look along a vertical/horizontal line through the center of the picture while moving the mouse and you can see it). I'm guessing it has something to do with moving from negative to positive, but I don't see how that would cause a problem after the scaling to the (0.0, 1.0) range. Is there something I'm missing here?

Edit: Oops, I should have used 255 instead of 256. Problem solved!
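The 255-vs-256 distinction is easy to check on the CPU. A JavaScript sketch of the pack/unpack round trip, emulating the 8-bit quantization the texture applies (this is an illustration of the idea, not the exact shader code above):

```javascript
// Writing to an RGBA8 texture quantizes each channel to one of 256 levels.
function toChannel(v) { return Math.round(v * 255) / 255; }

// Pack a 0-1 value into a (high, low) channel pair, scaling by 255: a
// channel's maximum value represents 255/255, not 256/256, which is why
// scaling by 256 produces a seam.
function pack(v) {
  var e = v * 255.0;
  return [toChannel(Math.floor(e) / 255.0), toChannel(e - Math.floor(e))];
}

function unpack(hi, lo) { return hi + lo / 255.0; }
```

Round-tripping values through pack/unpack stays within roughly 1/65025 of the original, i.e. genuine 16-bit precision from two 8-bit channels.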

HappyHippo fucked around with this message at 19:29 on Nov 24, 2014

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I'm about to finish introductory graphics and rendering (two separate courses) and we have to do a final project. Me and my mate decided to make a joint project for both classes about diffuse reflectance in real-time using the many point lights method with imperfect shadow maps. So far I'm fairly clear on what we have to do (with the exception of some details, but I already found reading material for most of it.) One thing I'm not quite sure about is how you do hemispherical shadow/depth maps. Projecting a point onto a sphere is intuitive enough, but I can't find anywhere that explains how to map an entire triangle to a sphere, so that its edges are mapped as well before they are rasterized for the final depth map. Are you just supposed to accept the approximation offered by mapping the vertices to the sphere and not accounting for edge warping?

E: Come to think of it, I guess we're already making a gross approximation with the ISMs, so approximating the hemisphere seems like the lesser of two evils there? At any rate, I still feel like I could use a bunch more reading, so if anyone has some links to some good resources on the subject I'd greatly appreciate them.
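For what it's worth, the standard trick here is a paraboloid projection of each vertex, accepting exactly that straight-edge approximation (ISM-style methods typically lean on small triangles so the warping error stays tolerable). A sketch of the per-vertex math in JavaScript, assuming for illustration that the hemisphere faces +z in light space:

```javascript
// Paraboloid-map a direction in the +z hemisphere to 2D coordinates in
// [-1, 1]^2, as in dual-paraboloid shadow mapping. Only the vertices are
// warped; triangle edges still rasterize as straight lines afterwards,
// which is the approximation accepted in practice.
function paraboloidProject([x, y, z]) {
  var len = Math.hypot(x, y, z);
  var nx = x / len, ny = y / len, nz = z / len;
  return [nx / (1 + nz), ny / (1 + nz)];
}

paraboloidProject([0, 0, 1]); // [0, 0]: straight ahead maps to the center
```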

Joda fucked around with this message at 01:48 on Nov 27, 2014

High Protein
Jul 12, 2009

HappyHippo posted:

So I just posted this thing in the screenshot thread where I used WebGL/shaders to plot the Julia set (move the mouse around to see the psychedelic colors).

At first I stored the real and imaginary components of z into the red and green channels in the texture, first by scaling to the range (0.0, 1.0), but that didn't offer enough precision. In an attempt to get more precision I tried using the remaining two channels to store an additional 8 bits like so:

code:
vec2 scaled = clamp(vec2(real, imag) / u_scale + .5, 0.0, 1.0);

gl_FragColor = vec4(floor(scaled * 256.0) / 256.0, fract(scaled * 256.0));
This mostly worked but I can see a noticeable seam along the real and imaginary axes (look along a vertical/horizontal line through the center of the picture while moving the mouse and you can see it). I'm guessing it has something to do with moving from negative to positive, but I don't see how that would cause a problem after the scaling to the (0.0, 1.0) range. Is there something I'm missing here?

Edit: Oops, I should have used 255 instead of 256. Problem solved!
Can't you just use a two-component 16 bit per component texture format?

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

High Protein posted:

Can't you just use a two-component 16 bit per component texture format?

How would I go about that? I would certainly like to. For texImage2D, as far as I can tell, the allowed internal formats are ALPHA, LUMINANCE, LUMINANCE_ALPHA, RGB and RGBA.

High Protein
Jul 12, 2009

HappyHippo posted:

How would I go about that? I would certainly like to. For texImage2D, as far as I can tell, the allowed internal formats are ALPHA, LUMINANCE, LUMINANCE_ALPHA, RGB and RGBA.

Hmm yeah, it appears that isn't possible in WebGL, sorry.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

OES_texture_half_float as an extension should let you use float16 values for each channel. I think it's pretty universal on desktop browsers.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
For deferred shading (OpenGL), how do you get anything other than floats between 0 and 1 into a texture? I'm currently accounting for this discrepancy in my shaders, but had to go with a solution that seems very shady, where I exploit that all vertices in my scene are between -1 and 1 on all axes. Also, are there any easy ways to avoid artifacts from position map imprecision when you generate your light map? I know multisampling the light map is an option, but performance is already an issue with what I'm doing.

This is how the light map looks. The artifacts are most obvious on the small box in the front and the wall to the right. (The scene is Cornell boxes)


E: I figured out the imprecision problem by encoding my scene information in RGBA16F. I still can't figure out how to get values over 1 or under 0 into the texture, though.
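For reference, the usual portable fallback (before reaching for float textures) is an explicit scale/bias. A tiny sketch of the encode/decode pair, assuming coordinates in [-1, 1] as in the scene above:

```javascript
// Map a value in [-1, 1] into the [0, 1] range a normalized-integer
// texture can store, and invert it when reading the G-buffer back.
function encode(v) { return v * 0.5 + 0.5; }
function decode(v) { return v * 2.0 - 1.0; }

decode(encode(-0.25)); // -0.25
```

With a float attachment like RGBA16F (and clamping disabled) the bias becomes unnecessary, at the cost of bandwidth.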

Joda fucked around with this message at 16:04 on Dec 5, 2014

Contains Acetone
Aug 22, 2004
DROP IN ANY US MAILBOX, POST PAID BY SECUR-A-KEY

Contains Acetone fucked around with this message at 17:33 on Jun 24, 2020

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

Thanks. Looks like I'd unknowingly fixed that problem as well when I increased the position map precision to 16-bit floats :doh:. Figured that I had to set a state somewhere to stop GL from clamping, so never thought to check it.

Joda fucked around with this message at 03:27 on Dec 6, 2014

The_White_Crane
May 10, 2008
So, I've been trying to use GeDoSaTo (an upsampling post-processing tool for prettifying vidjamagames) and I noticed a bug with it:

One of the post-processing options is a film grain effect. This looks pretty nice and all, but after the game has been running for a few minutes, it will lose its randomness and a set of rotating lines will become apparent amid the grain. Having looked through the code for the shader, I assume the problem is in the random texture generator function it seems to be using:

code:
float4 rnm(in float2 tc) 
{
    float noise =  sin(dot(tc + float2(timer,timer),float2(12.9898,78.233))) * 43758.5453;

    float noiseR =  frac(noise)*2.0-1.0;
    float noiseG =  frac(noise*1.2154)*2.0-1.0; 
    float noiseB =  frac(noise*1.3453)*2.0-1.0;
    float noiseA =  frac(noise*1.3647)*2.0-1.0;
    
    return float4(noiseR,noiseG,noiseB,noiseA);
}
Since the problem doesn't occur initially, but only after the game has been running for a while I figured it must have something to do with the increasing value of the 'timer' variable, which is declared at the beginning of the shader package

code:
const float timer;
and never used anywhere else I can find.

I tried replacing one of the two components of the float2 vector with a fixed value, so

code:
float noise =  sin(dot(tc + float2(timer,100),float2(12.9898,78.233))) * 43758.5453;
This did actually help somewhat, in that it delayed the breakdown of randomness, but it didn't actually fix the problem.

As you might have guessed from this post, I know nothing about HLSL; I've got only the most rudimentary of programming knowledge.

I was wondering though if anyone could: A - find a solution for this, and if practical B - give me a cliff-notes rundown on what was going wrong and why.

Xerophyte
Mar 17, 2008

This space intentionally left blank

The_White_Crane posted:

I was wondering though if anyone could: A - find a solution for this, and if practical B - give me a cliff-notes rundown on what was going wrong and why.

Looks like a floating point precision issue. If tc is in [0,1] or so then tc + float2(timer,timer) is going to drop more and more significant digits of the seed value as timer gets large and the sine will only assume a few discrete values.

What that function seems to essentially be doing is

code:
float4 rnm(in float2 tc) 
{
    float4 rgba = noise4(timer, tc.x, tc.y);
    return rgba;
}
with a home-grown 3d -> 4d hash/noise function that's a bit poo poo. It's based on a very common rand() approximation for shaders (see this Stackoverflow question) that isn't all that good to start with, but works fine for screenspace xy coordinates. It apparently fails for that particular dimensional expansion and variable range.

I guess the solution is to use a better noise function. There are a bunch of perlin/gradient/simplex libraries around (a random one I googled), but I have no experience with any of them. If you're not familiar with shader languages they might be a bit of a pain, also.

You can try to just replace the initial 3d -> 1d hash with something more well-behaved.
code:
   // Option 1: Include the timer in the dot product and try not to trample the other dimensions.
   float noise =  sin( dot(float3(tc.x, tc.y, timer), float3(12.9898, 78.233, 0.0025216)) ) * 43758.5453;

   // Option 2: Since the 2d function is well behaved for this usecase, compute an entirely separate hash for the timer. Slower but computation is cheap.
   float noise =  sin( dot(tc,float2(12.9898, 78.233)) ) + sin( timer*0.0025216 );
   noise = noise * 43758.5453 * 0.5;
I pulled the additional coordinate out of my rear end, but it should not be a divisor of the other two and be on roughly the scale of 1/timer, essentially.

The_White_Crane
May 10, 2008
Well, your first solution worked perfectly, so thank you for that.

I'm glad I was at least guessing roughly where the problem was occurring, if not why. I must admit, I don't think I have the requisite background knowledge to follow your explanation though. I get the problem of dropping more digits as the timer value increases, and I think I follow how that would cause the sine to gravitate towards a smaller set of values, but I have trouble with the idea of mashing a 4d variable to a 3d one.

When you have a 4d variable, for example, are the different dimensions any specific 'thing' or are they essentially arbitrary? Does the whole set of values evaluate out to a single number somehow? Honestly, until I looked at this stuff I hadn't encountered multidimensional variables before, and my vaguely remembered college maths seems to be inadequate to the task. :blush:

Sorry, I didn't mean to start trying to pump you for a comprehensive explanation of vector maths. I appreciate your effort.

Xerophyte
Mar 17, 2008

This space intentionally left blank

The_White_Crane posted:

Well, your first solution worked perfectly, so thank you for that.

I'm glad I was at least guessing roughly where the problem was occurring, if not why. I must admit, I don't think I have the requisite background knowledge to follow your explanation though. I get the problem of dropping more digits as the timer value increases, and I think I follow how that would cause the sine to gravitate towards a smaller set of values, but I have trouble with the idea of mashing a 4d variable to a 3d one.

When you have a 4d variable, for example, are the different dimensions any specific 'thing' or are they essentially arbitrary? Does the whole set of values evaluate out to a single number somehow? Honestly, until I looked at this stuff I hadn't encountered multidimensional variables before, and my vaguely remembered college maths seems to be inadequate to the task. :blush:

Sorry, I didn't mean to start trying to pump you for a comprehensive explanation of vector maths. I appreciate your effort.

Glad that it worked, although I should perhaps point out that it's still not a very good noise function. :) I'll see if I can explain better.

I'll start with what that code is supposed to be doing, then move on to how it works and finally point out where it fails. Re-reading this I realize it's far too long and I went into way too much detail. Oh well, it was pretty fun to think about and I wanted to explain it to myself anyhow. Hopefully it'll also make things a bit clearer.


What it is
The intent of that code snippet is to produce 4 floats of noise: arbitrary float values between -1 and 1. A noise function is essentially a specific type of hash function -- you send in some input and it produces a mostly arbitrary output. If you send in the same input twice you get the same output twice; if you change the input a little bit you get a completely different* output.

I say this is 3d to 4d, because the input is 3 floats: the x & y coordinates of the "tc" value and the timer value. The output is 4 floats: the 4 rgba components. There's nothing particularly magical about the size of the input and output vectors, but producing more floats from less data tends to be harder. Each float is 32 bits so you're trying to produce 4*32 bits of arbitrary output from 3*32 bits of arbitrary input. There's no way to cover the entire output space. It's still possible to do 1d -> 4d noise, but you need to be a bit careful to make sure that your 32 bits of input get spread over your 128 bits of output in a "good" way. What generally happens when this fails is that you end up with a discernible pattern in the output, which is bad since the entire point of noise is usually to not have a pattern.

In this particular case, we can try to look at how the code is trying to achieve its random output.

* In this case, where the intent is to be a random number generator (hence the rn in the name I guess). There are other types of noise that are (more) continuous, meaning that a slightly different input just means a slightly different output.


What it does
We'll start with the first line.
float noise = sin(dot(tc + float2(timer,timer),float2(12.9898,78.233))) * 43758.5453;
This takes the 3 input floats and tries to produce an arbitrary float. It's compact but if we expand the expression it's actually doing

float x = (tc.x + timer) * 12.9898;
float y = (tc.y + timer) * 78.233;
float noise = sin(x + y);
noise = noise * 43758.5453;


We're taking the sine of x+y, something that's roughly the size of ~100*timer + 50*tc, which gets us a number between -1 and 1. Then we multiply that with 43758.5453 to get an "arbitrary" float between -43758.5453 and +43758.5453. Because we're multiplying with something large the idea is that if you shift tc just a little bit you'll be changing the sine slightly and, since we're multiplying with 43758.5453, shifting the noise value a whole lot. The intent is that if you just look at the fractional part then you are computing a "random" value for each pixel of the screen, for scenarios where this might be nice (like a full screen grain filter). This works fine for the "original" noise algorithm where you don't have any sort of time included, but here it's breaking down when the timer is large. We'll get to why in a bit.

Finally, we're going to try to take our single large noise float and get 4 new floats between -1 and 1 out of it.
float noiseR = frac(noise)*2.0-1.0;
float noiseG = frac(noise*1.2154)*2.0-1.0;
float noiseB = frac(noise*1.3453)*2.0-1.0;
float noiseA = frac(noise*1.3647)*2.0-1.0;


The frac function just strips the integer part of the float and leaves the fractional part, between [0, 1]. The idea is that if we multiply our large, arbitrary noise value by different values (the 1.2154, 1.3453, etc) then the fractional part is going to change significantly, and we get a basically random value.

For instance, let's say the arbitrary noise float ended up being 1224.86. We'd get
float noiseR = frac(1224.86)*2.0-1.0 = 0.72;
float noiseG = frac(1224.86*1.2154)*2.0-1.0 = 0.389688;
float noiseB = frac(1224.86*1.3453)*2.0-1.0 = 0.608316;
float noiseA = frac(1224.86*1.3647)*2.0-1.0 = 0.132884;


which, hey, looks pretty random. It's not, of course, and the multipliers here are actually really poorly chosen -- much better to go with values that aren't as close to one another so that small shifts in the noise value won't affect each component in the same way -- but it's good enough randomness for a convincing grain filter, which is what we need.


Why it fails
Why is it breaking down when the timer is large? It's a fairly bad noise function, and noise on the GPU is black magic in the first place, so there are a couple of things that could go wrong. But let's look at the first line since that looks to be the primary issue:

float x = (tc.x + timer) * 12.9898;
float y = (tc.y + timer) * 78.233;
float noise = sin(x + y);


A float is 32 bits, of which 1 is the sign, 8 are exponent and 23 are the significand/mantissa. Those 23 bits (plus the implicit leading bit) mean that you're working with 7ish significant base-10 digits. If you're adding two floats of very different magnitude then the less significant digits in the smaller one will be ignored; e.g. if you take float(100000) + float(1.000001), you can expect the result to be float(100001). If "tc" is some sort of screen space position in [0,1] and you compute "tc.x + timer" then any dropped digits from tc would mean that this value will be the same for regions of the screen. This only manifests as the timer value becomes large.
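You can watch the digits drop on the CPU: JavaScript's Math.fround rounds to float32, so a quick sketch of the collapse described above:

```javascript
// Emulate float32 addition: round both operands and the sum to 32 bits.
function addF32(a, b) { return Math.fround(Math.fround(a) + Math.fround(b)); }

// Near 100000 a float32's spacing is 1/128, so sub-pixel-sized texture
// coordinates collapse to the same value once added to a large timer:
addF32(100000, 0.001) === addF32(100000, 0.002); // true
// At small magnitudes the same coordinates are still distinguishable:
addF32(1, 0.001) === addF32(1, 0.002); // false
```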


How to fix it
The change to computing
float noise = sin( dot(float3(tc.x, tc.y, timer), float3(12.9898, 78.233, 0.0025216)) ) * 43758.5453;

expands to
float x = tc.x * 12.9898;
float y = tc.y * 78.233;
float z = timer * 0.0025216;
float noise = sin(x + y + z) * 43758.5453;


Basically, I'm trying to keep x, y and z at the same magnitude for longer to avoid that particular precision failure. It's still bad, though. It will fail in the same way eventually and the entire sine thing is really not a very good way to get an arbitrary float out of the three inputs in the first place. One obvious improvement is to add a frac to stop the timer part of the sum from getting large.

float x = tc.x * 12.9898;
float y = tc.y * 78.233;
float z = frac(timer * 33.12789); // Now we want this to be "more arbitrary", hence the larger number
float noise = sin(x + y + z) * 43758.5453;

Xerophyte fucked around with this message at 17:28 on Dec 14, 2014

The_White_Crane
May 10, 2008
Wow!

Thank you very much! That's significantly clearer now, actually. I think the big problem I had was getting confused by the dot function; I didn't have much to do with that even when I did study maths (or I've forgotten it entirely) and I didn't understand at all how that was actually being fed its numbers in the code, because I'm so totally unfamiliar with HLSL syntax and functions.

Thanks a bunch; it was great of you to go out of your way to educate me. :love:

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
Apparently the line thickness property in WebGL might not work in all browsers; in some browsers all lines will have a thickness of 1. I guess the next best thing is to draw "lines" using quads. I was thinking you could probably do this with a geometry shader, but it seems like the kind of thing someone has probably already done. Google didn't turn up much; is there an implementation of that sort of thing out there already, or am I going to have to roll my own?

Edit: Oh, I guess WebGL doesn't even have geometry shaders, so much for that idea. Any other solution would be appreciated.
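Rolling your own mostly means extruding each segment along its 2D normal on the CPU (or in the vertex shader, with the segment direction as an attribute). A sketch of the pure math, with invented names:

```javascript
// Expand a 2D line segment into the four corners of a thickness-wide quad
// by offsetting both endpoints along the segment's unit normal. Emit the
// corners as a triangle strip (or two triangles) instead of a GL line.
function lineToQuad([x0, y0], [x1, y1], thickness) {
  var dx = x1 - x0, dy = y1 - y0;
  var len = Math.hypot(dx, dy);
  var nx = (-dy / len) * thickness / 2;
  var ny = (dx / len) * thickness / 2;
  return [
    [x0 + nx, y0 + ny], [x0 - nx, y0 - ny],
    [x1 + nx, y1 + ny], [x1 - nx, y1 - ny],
  ];
}

lineToQuad([0, 0], [10, 0], 2); // [[0,1],[0,-1],[10,1],[10,-1]]
```

Joins and caps between consecutive segments are the fiddly part; naive per-segment quads leave notches at corners.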

HappyHippo fucked around with this message at 00:32 on Dec 17, 2014


Jewel
May 2, 2009

Sadly, line thickness is one of the most annoying graphics problems given how simple it seems like it should be. There are a few papers on it; the technique is called "line stroking".

Here's an OpenGL extension that does it, which I think only works on NVIDIA: https://www.opengl.org/registry/specs/NV/path_rendering.txt

And here's a paper on how it's done, if you feel up to implementing it: https://hal.inria.fr/hal-00907326/PDF/paper.pdf
