Colonel J
Jan 3, 2008
I am getting into programming with Three.js and I hope some of you can help me here.
http://jsfiddle.net/fL33x/4/

The big yellow cube behind is a screen and the small yellow sphere is a camera pointing towards the origin. I'm trying to render what this camera sees onto the cube, but it just gives the error "glDrawElements: attempt to access out of range vertices in attribute 1".

The goal is to make my own shadow map shader: I'll render what the second camera sees as a depth map, then shade accordingly in a second pass depending on what the camera positioned at the "light source" sees. Any thoughts on this are welcome, but first I need to get the render-to-texture working. Line 274 is where it breaks.

Thanks so much to anyone who can help me out with this. I got it working in a previous test case, but now that I'm rewriting it more cleanly it just won't work, and it's driving me nuts.
If it helps, I'm basing my code on this example: http://stemkoski.github.io/Three.js/Camera-Texture.html
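
For reference, here's a minimal sketch of the setup I'm going for, based on the Stemkoski example above (the names scene / renderer / mainCamera / screenCamera are mine, not the fiddle's, and this is the 2014-era API where the render target is passed straight to renderer.render):

code:
// Render what the second camera sees into a target, then use that target as the
// big cube's texture. Built-in geometries like CubeGeometry already carry the UVs
// a textured material needs.
var screenTarget = new THREE.WebGLRenderTarget(512, 512, {
  minFilter: THREE.LinearFilter,
  magFilter: THREE.LinearFilter
});

var screenCube = new THREE.Mesh(
  new THREE.CubeGeometry(100, 100, 100),
  new THREE.MeshBasicMaterial({ map: screenTarget })
);
scene.add(screenCube);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, screenCamera, screenTarget, true); // pass 1: what the small camera sees
  renderer.render(scene, mainCamera);                       // pass 2: normal view; the cube shows pass 1
}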

Colonel J fucked around with this message at 03:27 on Feb 6, 2014

Colonel J
Jan 3, 2008

HiriseSoftware posted:

Does "cubeGeometry" have texture coordinates? When you're giving the mesh a texture material, it must be expecting some texture coordinates as part of the geometry, and it's getting none, which would cause the error. glDrawElements renders vertices by an array of indexes - it probably found the XYZ, but not the UV.

Edit: I'll admit I don't have any experience with THREE.js but I was fiddling around with your, uh, fiddle, based on some info I found online, but I couldn't get anything to work.

I did see this though: http://stackoverflow.com/questions/16531759/three-js-map-material-causes-webgl-warning

I'm not sure what the problem was in the end. I just revamped my test case and it works, though I can't find what the difference is between the two files. Pretty ridiculous, but eh, I got it sorta working now. Thank you for looking at it though.

I do have another thing that popped up now. Here's my jsfiddle: http://jsfiddle.net/5br8D/4/ . You can move around with click and drag; the mouse wheel zooms in and out.

You can see that, besides the blue stuff in the back, there is a screen displaying the shadow map, made using an orthographic view. It's a bit hard to see, but it works: basically the stuff over the cube is darker and thus closer to the camera. The shadow map is contained in a WebGLRenderTarget which I'm trying to pass to my shader as a uniform texture; however, it doesn't seem to work, and by the time you get to the shader it's all white pixels. Does anybody have any idea how to pass a WebGLRenderTarget to a shader so you can read its pixels' colors?

I hope I'm being clear enough and that my code is understandable; I can comment it more if needed. I looked around on the internet, and this question at http://stackoverflow.com/questions/18167797/three-js-retrieve-data-from-webglrendertarget-water-sim seems close to what I'm looking for, but I'm not really sure I understand what's going on in that solution. Cheers to anyone who knows enough WebGL / Three.js to make this happen.


EDIT:

So I did some work using the example I linked and sorta made some progress; here's the updated jsfiddle: http://jsfiddle.net/5br8D/6/

I added a buffer buf1 which takes its image data from the renderer context; it's updated starting at line 329 in the JavaScript. buf1 is then passed to the shaders as a sampler2D. If you uncomment line 48 in the HTML, the scene is drawn using the orthographic camera's matrices; then if you uncomment line 84, the colors are taken from buf1. However it's all 0s, so either the readPixels call doesn't really work, or the xy coordinates I grab the pixel data from are wrong, which would surprise me because the projection is correct on screen.

How can readPixels return black pixels? There isn't actually anything black on the shadow map. Gah, I'm so confused.
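
One guess on my part (possibly not the actual cause in the fiddle): by default the WebGL drawing buffer is cleared once the frame is handed back to the browser, so readPixels on the renderer's context outside the frame that drew it returns zeros unless the renderer keeps the buffer around:

code:
// A guess at one classic readPixels pitfall: without preserveDrawingBuffer the canvas
// is cleared after compositing, so reads made outside the frame that drew it come back as zeros.
var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });

function readBackPixels() {
  var gl = renderer.getContext();
  var w = renderer.domElement.width;
  var h = renderer.domElement.height;
  var pixels = new Uint8Array(w * h * 4);
  gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels); // read right after rendering
  return pixels;
}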

More editing; this post is turning out to be loving long and rambling, but I guess what I'm after right now is: how do you turn gl_Position into gl_FragCoord for an orthographic camera? I know the pipeline does it automatically, but I need to do it myself here to get the shadow map coordinates. Google isn't really helping on this one. Thanks!

Colonel J fucked around with this message at 02:07 on Feb 8, 2014

Colonel J
Jan 3, 2008

HiriseSoftware posted:

I don't have any experience with shadow mapping, but I found this which has a part about calculating the shadow map coordinates:

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/

It's something about multiplying a "bias matrix" against the MVP matrix used from the viewpoint of the light. Multiply that result by your model coordinates and you have the coordinate you pass to the texture - XY is for the texture lookup, and Z is used to determine if an object is in shadow or not.

Thanks for your help, I finally got it working. I couldn't really get the bias matrix to work, as it would distort my geometry in strange ways. I just multiplied the vertex positions by 0.5 and translated by 0.5, and they're good now.

As for sending the shadow map to the shaders as a uniform, I'll leave the answer here for posterity: you can send a WebGLRenderTarget to a shader as a regular texture and it'll work just fine.

Here's the updated fiddle: http://jsfiddle.net/7b9G8/1/
The yellow sphere just represents the directional light vector; it's not the actual light source.
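
In case it helps someone later, here's the rough shape of it, reconstructed from memory rather than copied from the fiddle (uniform and matrix names are mine; newer Three.js versions want shadowTarget.texture instead of the target itself):

code:
var shadowTarget = new THREE.WebGLRenderTarget(1024, 1024);

var shadowedMaterial = new THREE.ShaderMaterial({
  uniforms: {
    shadowMap:   { type: 't',  value: shadowTarget },        // the render target works as a plain texture
    lightMatrix: { type: 'm4', value: new THREE.Matrix4() }  // lightCamera.projectionMatrix * lightCamera.matrixWorldInverse, updated each frame
  },
  vertexShader: [
    'uniform mat4 lightMatrix;',
    'varying vec4 shadowCoord;',
    'void main() {',
    '  shadowCoord = lightMatrix * modelMatrix * vec4(position, 1.0);',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'uniform sampler2D shadowMap;',
    'varying vec4 shadowCoord;',
    'void main() {',
    '  // the "multiply by 0.5, translate by 0.5" step: orthographic clip space [-1,1] -> texture space [0,1]',
    '  vec3 coord = shadowCoord.xyz * 0.5 + 0.5;',
    '  float storedDepth = texture2D(shadowMap, coord.xy).r; // assumes the depth pass wrote light-space depth into red',
    '  float lit = coord.z <= storedDepth + 0.005 ? 1.0 : 0.4; // small bias against shadow acne',
    '  gl_FragColor = vec4(vec3(lit), 1.0);',
    '}'
  ].join('\n')
});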

Colonel J fucked around with this message at 21:14 on Feb 11, 2014

Colonel J
Jan 3, 2008

HiriseSoftware posted:

Is the shadowing working correctly as you're intending? At certain points the shadow of the vertical stick should pass over the horizontal ones, but I can see that it's not. Maybe it's an artifact of my older video card, but it doesn't look right to me.

Yeah, that's a bit of a weird behavior that seems tied to the renderDepth of the objects in the scene. I played with it a bit but didn't spend too much time on it; basically some objects render in the wrong order during the shadow map pass and end up underneath things that are actually in front of them.

I added
yCube.renderDepth = 100;
to try and force it to render last and it's better: http://jsfiddle.net/7b9G8/2/

However I fear it wouldn't hold for every camera orientation, and I'm not sure how to fix it for good.

Colonel J
Jan 3, 2008
I'm working with Peter-Pike Sloan's paper Stupid Spherical Harmonics Tricks right now, trying to implement a Monte Carlo integrator for the rendering equation.

In Appendix A he gives recurrence relations for calculating the associated Legendre polynomials:

[recurrence relations for P(m,m), P(m+1,m) and P(l,m); posted as an image]

and then states you increment m in the outer loop and l in the inner, which leads me to believe that to find P(l,m) you have to start with P(0,0) = 1 and work your way up the l's to the one you want, then do P(1,1) and work your way up the l's again, etc.

It works for the P(l,0) terms, but as soon as I hit P(1,1) it's like the equations up there break. Mathematica tells me P(1,1) = 0, but if you use the equation in the paper you get P(1,1) = (1 - 2*1)*P(0,0) = -1. Who's in the wrong here? (Probably me.)
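
For anyone hitting the same wall: the textbook form of the recurrence (e.g. in Robin Green's "Spherical Harmonic Lighting: The Gritty Details") carries a sqrt(1 - x*x) factor in the P(m,m) step, which is what makes P(1,1) vanish at x = ±1; here's a quick JS transcription of that standard formulation (not the paper's exact notation):

code:
// Standard associated Legendre evaluation (Numerical Recipes-style); note the
// sqrt(1 - x*x) in the P(m,m) step that the bare (1 - 2m) recurrence leaves out.
function legendreP(l, m, x) {
  var pmm = 1.0;                                   // P(m,m), built up from P(0,0) = 1
  if (m > 0) {
    var somx2 = Math.sqrt((1.0 - x) * (1.0 + x));
    var fact = 1.0;
    for (var i = 1; i <= m; i++) {
      pmm *= -fact * somx2;                        // P(m,m) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
      fact += 2.0;
    }
  }
  if (l === m) return pmm;
  var pmmp1 = x * (2.0 * m + 1.0) * pmm;           // P(m+1,m) = x (2m+1) P(m,m)
  if (l === m + 1) return pmmp1;
  var pll = 0.0;
  for (var ll = m + 2; ll <= l; ll++) {            // (l-m) P(l,m) = x (2l-1) P(l-1,m) - (l+m-1) P(l-2,m)
    pll = (x * (2.0 * ll - 1.0) * pmmp1 - (ll + m - 1.0) * pmm) / (ll - m);
    pmm = pmmp1;
    pmmp1 = pll;
  }
  return pll;
}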

Colonel J
Jan 3, 2008
Those are pretty pictures, Boz0r. I wish I could help you out, but I'm way too much of a beginner.

I'm trying to make a path tracer using WebGL and I just found out GLSL doesn't allow recursive calls. What's the alternative? Hardcode one shading function per light bounce and call them in sequence? Thanks.
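
From what I've read, the usual workaround is to unroll the recursion into a fixed-length loop over bounces that carries an accumulated throughput; something shaped like this, where intersectScene / sampleHemisphere / skyColor and the Hit struct are placeholders, not code I actually have:

code:
// The loop-instead-of-recursion idea, as GLSL in a JS string the way it would be fed
// to a ShaderMaterial. All the helper names below are hypothetical stand-ins.
var traceSource = [
  'const int MAX_BOUNCES = 4;',
  'vec3 trace(vec3 ro, vec3 rd) {',
  '  vec3 radiance   = vec3(0.0);',
  '  vec3 throughput = vec3(1.0);',
  '  for (int bounce = 0; bounce < MAX_BOUNCES; bounce++) {',
  '    Hit h = intersectScene(ro, rd);',
  '    if (!h.found) { radiance += throughput * skyColor(rd); break; }',
  '    radiance   += throughput * h.emission;      // pick up emitted light at each bounce',
  '    throughput *= h.albedo;                     // attenuate by the surface reflectance',
  '    ro = h.position + 0.001 * h.normal;         // offset to avoid self-intersection',
  '    rd = sampleHemisphere(h.normal);            // new ray direction for the next bounce',
  '  }',
  '  return radiance;',
  '}'
].join('\n');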

Colonel J
Jan 3, 2008
Has anyone ever toyed around with Ramamoorthi's article An Efficient Representation for Irradiance Environment Maps? I have a radiance probe I want to convert to irradiance with the spherical harmonics technique; I've started writing a Python program to implement the technique from the article, but I'm pretty sure I'm doing it wrong. If anybody has advice or even an example implementation, I'd be forever grateful!
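
(For context, the core of the paper as I understand it is just a per-band scaling of the radiance SH coefficients by the clamped-cosine kernel, A0 = pi, A1 = 2*pi/3, A2 = pi/4; sketched here in JS rather than the Python I'm actually writing, and the flat indexing is my own convention:)

code:
// Per-band convolution with the clamped-cosine kernel from Ramamoorthi & Hanrahan:
// irradiance SH coefficients = A[l] * radiance SH coefficients, for bands l = 0..2
// (9 coefficients per color channel). The l*(l+1)+m indexing is mine, not the paper's.
var A = [Math.PI, 2 * Math.PI / 3, Math.PI / 4];

function radianceToIrradianceSH(L) {   // L: 9 radiance coefficients for one color channel
  var E = new Array(9);
  for (var l = 0; l <= 2; l++) {
    for (var m = -l; m <= l; m++) {
      E[l * (l + 1) + m] = A[l] * L[l * (l + 1) + m];
    }
  }
  return E;
}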

Colonel J
Jan 3, 2008
I'm just starting out my CG career, but I gotta say: man, the inconsistency in coordinate systems across platforms is annoying. Right now I'm working with a few different programs that have varying conventions as to whether Z or Y is up, and which way "forward" is. Is there a list of equations to go to and from all the possible orientations? I know I'll have to work them out by hand but :effort:
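
(One approach rather than a big list of equations: express each convention change as a single change-of-basis matrix and keep the per-tool cases as data. A sketch assuming a Z-up right-handed source like Blender and the Y-up right-handed Three.js/OpenGL convention, with THREE assumed to be loaded:)

code:
// Change of basis from Z-up right-handed (e.g. Blender) to Y-up right-handed (Three.js/OpenGL):
// source X -> X, source Y (forward) -> -Z, source Z (up) -> Y. Matrix4.set() is row-major.
var zUpToYUp = new THREE.Matrix4().set(
  1,  0, 0, 0,
  0,  0, 1, 0,
  0, -1, 0, 0,
  0,  0, 0, 1
);

var p    = new THREE.Vector3(1, 2, 3).applyMatrix4(zUpToYUp); // (1, 3, -2) in the Y-up frame
var back = zUpToYUp.clone().transpose();                      // inverse mapping: it's a pure rotation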

Colonel J
Jan 3, 2008
I'm trying to finally wrap up my Master's degree, and I'm looking for a scene to try out the technique I'm developing; surprisingly, finding a good free scene on Google is a pretty awful process, and I'm not having much luck. I'm looking mainly for an interior-type scene, such as an apartment, ideally with a couple of rooms and textures / good color contrast between the different surfaces (I'm working on indirect illumination). Something like this: http://tf3dm.com/3d-model/luxury-house-interior-74731.html , but like most scenes I download from that site, a bunch of textures are missing and there isn't even a .max file.

What do you guys use as a source for quality scenes? There has to be a modelers' community with quality portfolios or something like that... I'd do it myself, but I'm not much of an artist :\ Thanks a lot!

Colonel J
Jan 3, 2008

Hubis posted:

Can confirm. Crytek Sponza is good, San Miguel is great but it's a f'ing 1.8 GB OBJ file because it's plain text and all the instances are unrolled :cripes:

I've been messing around with Blender trying to find a binary format that supports the required materials, etc. If my blender-fu improves I might even try to remake it with instancing, but my real job keeps getting in the way.

Yeah, I used Crytek Sponza and haven't been getting very good results, and San Miguel is just too big for my limited RAM; when it's loaded in G3D with a TriTree I just enter swap hell.

And thanks for this ^^! Trying these out atm.

Colonel J
Jan 3, 2008

Hubis posted:

what problem are you having with Sponza? it *should* have good materials

I mean that my algorithm isn't working too well.

Colonel J
Jan 3, 2008

Ralith posted:

Can you share anything about your work? I've been reading about realtime GI lately and am quite interested in new developments.

Of course! I'm working on irradiance probes. You've probably heard about them; they're a pretty standard way of approximating GI by sampling the spherical irradiance function passing through discrete points in the scene and interpolating between the samples at runtime. It's pretty straightforward to create an irradiance volume by placing samples along a 3D regular grid and interpolating trilinearly, but that can easily lead to oversampling as your sample set grows fast that way and irradiance tends to vary pretty smoothly (for distant light sources).

I'm basically trying to automatically construct optimal probe sets by minimizing an error function: I take a much larger number of sample points than my final desired probe set size as a reference, then compute an error term which is the sum of squared differences between the SH projection coefficients of the ground-truth samples and the interpolated values from my probe set. I can then use the gradient of the probes' SH coefficients to find a direction to move them in that lowers the error term, have them take a step in that direction, and continue until I've found a minimum.

By following a sequence of 1) placing a new probe by trying out a bunch of locations (it's precomputation, so I can try as many as I want and it's fast) and keeping the best one, followed by 2) a gradient descent pass until I reach a local minimum for the current probe set, I'm able to get pretty consistent results, with an error term half that of a trilinear grid using 10x as many probes.

This sounds like a good result, but honestly I'm not too happy with it; I've been finding out that a lower theoretical error does not necessarily lead to more pleasant shading. The important thing is smoothness and visual consistency, and the way I'm building the probes doesn't really care about that; it cares about lowering the error term by any means possible, and the final shading has obvious flaws. My choice of interpolating between probes by weighted nearest neighbour has advantages: for example, I've been able to derive the equations for gradient descent without too much pain, as it's all continuous and well-behaved for small probe displacements, unlike a 3D grid, in which crossing a cell border introduces a discontinuity.
However, I think the disadvantages are greater; the worst thing is that every probe in the structure influences every shading point, which is extremely wrong. I'm thinking I'd have to separate my scenes into distinct "visibility volumes", which is kind of what they do in the industry, but I'd rather have had a "black box" into which you can feed a polygon soup and get an ideal probe set as output.
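
(To make the above a bit more concrete, here's the rough shape of the error term and the weighted-nearest-neighbour interpolation, sketched in JS with made-up names and an inverse-distance kernel standing in for the actual weighting:)

code:
// Interpolate probe SH coefficients at each reference sample with WNN weights,
// then sum squared coefficient differences against the ground truth.
function interpolateSH(probes, x) {        // probes: [{ position: THREE.Vector3, sh: Float32Array }]
  var out = new Float32Array(probes[0].sh.length);
  var wSum = 0;
  for (var i = 0; i < probes.length; i++) {
    var w = 1 / (probes[i].position.distanceToSquared(x) + 1e-6);
    wSum += w;
    for (var k = 0; k < out.length; k++) out[k] += w * probes[i].sh[k];
  }
  for (var j = 0; j < out.length; j++) out[j] /= wSum;
  return out;
}

function probeSetError(probes, samples) {  // samples: the dense ground-truth set
  var err = 0;
  for (var s = 0; s < samples.length; s++) {
    var approx = interpolateSH(probes, samples[s].position);
    for (var k = 0; k < approx.length; k++) {
      var d = approx[k] - samples[s].sh[k];
      err += d * d;                        // sum of squared SH-coefficient differences
    }
  }
  return err;
}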

Still, it depends; say you're making a racing game: you're going to place your probes along the track, and a regular grid is probably not a good idea there. You could instead just interpolate linearly between the probes (along track coordinates) and set your coefficients up so that the 3 closest probes have the most influence over the result (really just thinking out loud here). In that sort of situation I think a technique like mine could prove useful for optimal probe placement.

So yeah, not groundbreaking work, but I think I have enough meat for a thesis: good theoretical results, but not a usable algorithm in practice. There are lots of things I could have done better in retrospect; for example, a strategy of creating an extremely dense grid and removing nodes while keeping the error function as low as possible, building some sort of octree, could have been a good solution. I don't think what I did is amazing, especially compared to the fancy stuff they're doing in modern games and cutting-edge research, but this was my first foray into CS (as a physics major) and it led to me working in the industry, so it's not all bad :)

edit: to redeem myself, here's some good modern research on probes, much fancier than what I've got: http://graphics.cs.williams.edu/papers/LightFieldI3D17/

Colonel J fucked around with this message at 14:50 on Mar 13, 2017

Colonel J
Jan 3, 2008

Xerophyte posted:

Hey, it sounds pretty good to me. I used to work in the same office as a team who worked on a lightmap and irradiance probe baker for games (Beast), and better automatic probe placement was always their holy grail. They had an intern who did what sounds like a very similar master's thesis to yours a couple of years back. I think he ended up doing descent on the vertices of a 3D Delaunay tessellation, but light leaking was a constant problem. He had some heuristics for splitting tetrahedra that crossed geometry and other boundaries, but as I understood it things would get messy for any complex scene. The thesis is here if you're curious.

Wow, that's amazing; this thesis is one of my main references, as there aren't too many people who have worked on this yet! And yeah, light leaking is awful; even in modern AAA games it's all over the place. I'd rather have used tetrahedra for interpolation as well, but I didn't dare venture into finding the derivatives for the weights; I'll take another look through that thesis, because if that guy went through the trouble of doing it, it could be pretty useful to me.

And thanks for the encouragement; I've looked at it so much that at this point all I see are the flaws...

Colonel J
Jan 3, 2008

lord funk posted:

I'm looking for examples of cool / creative fragment shaders. Basically anything that's fun or interesting. Is there a place where people post these? or does anyone have a neat example?

https://www.shadertoy.com/view/Xs2cR1

Colonel J
Jan 3, 2008

Doc Block posted:

Well, if nothing else you need the w coordinate to make the vertex have 4 elements so it can be multiplied against a 4x4 matrix. The real purpose of the w coord IIRC is that it gets used to do the perspective divide. Set it to 1.0 in your vertex data and then don't worry about it.

Any language that can do arrays of floats can do matrix math, you just have to write the code yourself. For C and C++, people either write their own math libraries for vectors and matrices or use something like GLM.

Be wary though that for transforming points w is 1, but for transforming directions (such as normals) you want to set w to 0.
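
(A quick illustration, assuming THREE is loaded: under a translation, a point with w = 1 moves, while a direction with w = 0 doesn't.)

code:
var translate = new THREE.Matrix4().makeTranslation(10, 0, 0);

var point     = new THREE.Vector4(1, 2, 3, 1).applyMatrix4(translate); // (11, 2, 3, 1): the point moves
var direction = new THREE.Vector4(0, 0, 1, 0).applyMatrix4(translate); // (0, 0, 1, 0): the direction doesn't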

A careful reading of https://learnopengl.com/#!Getting-started/Coordinate-Systems

and

http://www.songho.ca/opengl/gl_projectionmatrix.html

should clear up most of the maths.

Colonel J
Jan 3, 2008
Felt like playing with matrix transforms (linear and non-linear); here's a work in progress: https://www.shadertoy.com/view/WtBXRD

I made it work with both 2D and 4D matrix transforms, which allowed me to finally, truly understand why you need a 4D matrix for translation.
You can comment / uncomment defines at the top for various features.

Disclaimer: I had no idea how to draw a grid in a pixel shader, so I went with:
1) scale/offset the space to the range I want (so (0,0) is in the middle of the screen),
2) if the value of (x - round(x)) is smaller than some epsilon, we're on a grid line (same for y).

To draw the transformed grid lines I proceed by inversion; my reasoning is that for pixel x, after transformation T, it is now at some location x+dx. I can't write to another location, as the pixel shader only runs on pixel x (compute Shadertoy when!?), so instead I consider that the pixels are already in the transformed space, and check whether T_inverse(x+dx) makes pixel x land on a grid line. It made sense when I did it; now I'm a bit confused, but it seems to work. Is there any other way to do it?
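
(The same idea boiled down to plain JS, with onGridLine standing in for the epsilon test above and invT for whatever maps a pixel back through the inverse of T:)

code:
// Instead of pushing pixels forward through T, treat each pixel as living in the
// transformed space and test whether T^-1(pixel) lands on a grid line.
function onGridLine(p, eps) {
  eps = eps || 0.02;
  return Math.abs(p.x - Math.round(p.x)) < eps || Math.abs(p.y - Math.round(p.y)) < eps;
}

function transformedGridLine(pixel, invT) { // invT: function mapping a pixel back through the inverse of T
  return onGridLine(invT(pixel));
}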

It's weird: running on my 2013 Retina MacBook, I get 60 FPS in the 2D case, but it drops down to 25-30 FPS in the 4D case. I guess inverting a 4x4 matrix at every pixel is too much. At work, on a GTX 1060, I'm getting 60 FPS in all cases. Is it a Windows/OSX thing, or is my MacBook GPU just not that powerful? I'd be curious to hear perf reports from people here.
As long as the transform is linear (i.e. not the "fancy matrix" case), the matrix is the same for every pixel; is it possible to do that work just once in Shadertoy? I guess I'd have to do it in some sort of prepass, or just compute the values and hardcode them.

Colonel J
Jan 3, 2008
If you know a bit of programming, you could do the learnopengl.com tutorials; the basics are gentle for beginners, and they'll show you roughly what's happening when setting up and using a shader.

It's OpenGL though; setting up a DirectX pipeline will be different, but the concepts are roughly similar.

Colonel J
Jan 3, 2008
I really enjoyed this shader deconstruction by Inigo Quilez: https://www.youtube.com/watch?v=Cfe5UQ-1L9Q

Dude is really good, and he makes it quite accessible.

Colonel J
Jan 3, 2008

peepsalot posted:

drat, I've seen some of this guy's demos and web pages which are really informative and impressive. So I'm interested in checking this out eventually, but holy poo poo 6 hours long!

I'm slightly ashamed to say I watched it all and every minute was good.

I especially like the "physically-inspired" approach rather than going for absolute realism. It seems like you can get more done more quickly this way, with a much more artistic result.
