OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Win8 Hetro Experie posted:

Though it's a bit odd that some of the issues mentioned, like something hogging the GPU or the driver being updated must surely apply in desktop OpenGL as well.
I looked into this a bit further and it looks like it's changed a bit. For a long time, NVIDIA and ATI were lobbying the ARB for an extension to allow for evictable contexts because it was a waste of RAM on Windows, and the ARB told them to get hosed because Windows was the only desktop or workstation OS that would do that. The result was both of them incessantly using non-compliant mapping behavior because storing copies of transient draw resources was incredibly stupid.

Apparently the ARB caved in: as of 2010 you can specify a "reset strategy" when you create a context, which determines which of the two behaviors you get (one of them being that the context is lost), and the "preserve everything" approach got downgraded to a recommendation.
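For reference, a minimal sketch of what that looks like through WGL_ARB_create_context_robustness, assuming wglCreateContextAttribsARB has already been loaded and the extension is present (hdc is a placeholder for your device context; the GLX path has equivalent GLX_* enums):

C++ code:
// Sketch: create a context that reports device resets instead of hiding them.
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_RESET_NOTIFICATION_STRATEGY_ARB, WGL_LOSE_CONTEXT_ON_RESET_ARB,
    0
};
HGLRC context = wglCreateContextAttribsARB(hdc, NULL, attribs);

// With GL_ARB_robustness you can then poll for a reset after rendering:
// GLenum status = glGetGraphicsResetStatusARB();
// if (status != GL_NO_ERROR) { /* context is lost: recreate it and re-upload resources */ }
The other strategy value, WGL_NO_RESET_NOTIFICATION_ARB, is the default: resets simply aren't reported to the application.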

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe
I feel like I'm missing something, resource views just seem kind of convoluted.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb205129(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476900(v=vs.85).aspx#Views

The depth buffer / render target are just 2D textures, so what is the point in making me go through an extra step to create a view out of it before I bind it, instead of just letting me bind it as it was? To use a texture, I'm creating an ID3D11Texture2D just to use it to create an ID3D11ShaderResourceView from it.

Am I missing some greater reason for their existence or what? I feel like it should have enough data to do whatever it needs to do at that point, but I dunno.

slovach fucked around with this message at 03:53 on Feb 24, 2013

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm trying to write light refraction in a software raytracer for a course. We were given an algorithm, but it seems my ray is being reflected in addition to being refracted.

code:
refractionColor = Vector3(1.0f) - m_kd;
HitInfo refractionHit;
float my1 = 1, my2 = 1.31;
float costheta1 = dot(hit.N, ray.d);
float costheta2 = sqrt(1 - pow(my1/my2, 2) * (1 - pow(dot(ray.d, hit.N), 2)));

Vector3 vRefract;
if (costheta1 >= 0) {
	vRefract = (my1/my2) * ray.d + ((my1/my2) * costheta1 - costheta2) * hit.N;
} else {
	vRefract = (my1/my2) * ray.d - ((my1/my2) * costheta1 - costheta2) * hit.N;
}

Ray rayRefract = Ray(Vector3(hit.P), vRefract);
	
if(scene.trace(refractionHit, rayRefract, 0.001f, 100.0f)) {
	refractionColor *= refractionHit.material->shade(rayRefract, refractionHit, scene, recDepth - 1);
}
Can anyone understand my code, and see what I'm doing wrong?

Boz0r fucked around with this message at 12:47 on Feb 25, 2013

Xerophyte
Mar 17, 2008

This space intentionally left blank

Boz0r posted:

Can anyone understand my code, and see what I'm doing wrong?

I suspect ray.d is the direction of travel of the ray, but you seem to be assuming it is the direction from the intersection to the source of the ray when computing costheta1 at least. Consider replacing it with -ray.d.

Once you get the rays to refract you also need to invert the ratio of IORs for interior rays, as they are transmitting from the dense medium to air rather than the other way 'round. Also consider that 1 - pow(my1/my2, 2) * (1 - pow(dot(ray.d, hit.N), 2)) can be negative, which has a specific physical interpretation. Applying sqrt without checking is asking for trouble.
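To make that last point concrete, a minimal sketch of guarding the square root, reusing the names from the snippet above (treat it as one possible structure, not the only one):

C++ code:
// Sketch: compute the radicand first and only take sqrt when it is non-negative.
float eta       = my1 / my2;
float costheta1 = dot(hit.N, -ray.d);                  // note the -ray.d, per the fix above
float radicand  = 1 - eta * eta * (1 - costheta1 * costheta1);

if (radicand < 0.0f) {
    // No transmitted ray exists in this case (this is the physical situation being hinted at);
    // handle it separately, e.g. by only reflecting.
} else {
    float costheta2 = sqrt(radicand);
    // ... build vRefract as before ...
}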

There are also slightly cheaper ways to compute Snell's, but that's something you shouldn't worry about...

Boz0r
Sep 7, 2006
The Rocketship in action.
And just like that, it works. Thank you very much.

EDIT: Am I wrong in thinking that when costheta1 > 0 the ray is going in, and if costheta1 < 0, it's going out?

Boz0r fucked around with this message at 12:58 on Feb 25, 2013

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm pretty much having a brain fart as to how to calculate the distance to a plane in doing depth of field raytracing.



Right now my focus is just a set distance from the camera, so the focus plane is curved. I can't figure out how to calculate the correct focus distance. I have the camera direction and the focus distance, so I can define the plane from that, but that's as far as I've gotten.

ani47
Jul 25, 2007
+
I think this is what you're after:


float d = dot(objectWorldPos - camWorldPos, camForwardDir);

Xerophyte
Mar 17, 2008

This space intentionally left blank

Boz0r posted:

EDIT: Am I wrong in thinking that when costheta1 > 0 the ray is going in, and if costheta1 < 0, it's going out?

Yes, that's right. Since we're on the subject, the usual way of dealing with orientation for a refractive material is something like
C++ code:
float NdotI      = dot(hit.N, -ray.d);
bool interiorRay = NdotI < 0.0f;
float3 normal    = interiorRay ? -hit.N : hit.N;
float iorRatio   = interiorRay ? my2/my1 : my1/my2; // why "my", anyhow? Well, whatever works...
and then you can treat both types of transmission identically in the rest of the code.

Boz0r posted:

I'm pretty much having a brain fart as to how to calculate the distance to a plane in doing depth of field raytracing.
I'm not entirely sure I follow why this is something you need for depth of field. Your explanatory image doesn't have any dof, that seems to be a regular pinhole camera where everything is in focus. You get depth of field when your ray origin isn't a single point but you instead have an aperture of some size and shape. Simulating it typically means that for each ray you select some random point on that aperture as the origin and some random point on your "pixel" on the focal plane as a target and cast thataway. Gaussian optics are usually assumed for convenience, but if you have some other focal surface you can project pixels onto then knock yourself out. Codewise you basically get
C++ code:
float3 start = randomPointOnAperture(), stop = randomPointOnFocalPlane();

Ray primaryRay;
primaryRay.origin = start;
primaryRay.d      = normalize(stop - start);
and then you trace enough of those to get the image to converge.

Paradoxish
Dec 19, 2003

Will you stop going crazy in there?
This may be a really stupid Direct3D 11 question, but I'm going to ask it anyway:

I'm procedurally generating cube maps, which I'm using to texture spheres. As far as I can tell, my options are to create the texture with cpu access and map each subresource to write my pixels, or do that, but then copy my generated texture to an immutable resource and get rid of the staging texture. Right now I'm doing the latter since I won't need to access or change the texture later. Am I going about this the right way, or is there some third option I'm missing here?

High Protein
Jul 12, 2009

slovach posted:

I feel like I'm missing something, resource views just seem kind of convoluted.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb205129(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476900(v=vs.85).aspx#Views

The depth buffer / render target are just 2D textures, so what is the point in making me go through an extra step to create a view out of it before I bind it, instead of just letting me bind it as it was? To use a texture, I'm creating an ID3D11Texture2D just to use it to create an ID3D11ShaderResourceView from it.

Am I missing some greater reason for their existence or what? I feel like it should have enough data to do whatever it needs to do at that point, but I dunno.

When I started programming with D3D10/11 I wondered about the same thing, but now I love the mechanism; you don't use it often, but it can be really helpful. Views allow you to look at the same bit of data in different ways. That's useful when you've got things like a large buffer you only want to look at a section of, or when you've got an integer-format texture and want to read it as normalized data in the pixel shader. Or maybe you want to only use a texture starting from a certain mip level, etc.
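As a concrete illustration of that last case, a small sketch of a view that skips the top mip levels (pDevice and pTexture are placeholders for your device and an already-created texture; error handling omitted):

C++ code:
// Sketch: expose an existing ID3D11Texture2D to shaders starting at mip 2.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format                    = DXGI_FORMAT_R8G8B8A8_UNORM;   // or reinterpret a TYPELESS resource here
srvDesc.ViewDimension             = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 2;          // skip mips 0 and 1
srvDesc.Texture2D.MipLevels       = (UINT)-1;   // use all remaining mips

ID3D11ShaderResourceView* pSRV = nullptr;
HRESULT hr = pDevice->CreateShaderResourceView(pTexture, &srvDesc, &pSRV);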

High Protein
Jul 12, 2009

Paradoxish posted:

This may be a really stupid Direct3D 11 question, but I'm going to ask it anyway:

I'm procedurally generating cube maps, which I'm using to texture spheres. As far as I can tell, my options are to create the texture with cpu access and map each subresource to write my pixels, or do that, but then copy my generated texture to an immutable resource and get rid of the staging texture. Right now I'm doing the latter since I won't need to access or change the texture later. Am I going about this the right way, or is there some third option I'm missing here?

By copying your generated texture to an immutable resource, do you mean a call like CopyResource()? Because afaik that doesn't accept immutable resources as the target. It works with 'default' resources though.

A more direct option might be to use a CreateTexture2D with a D3D11_SUBRESOURCE_DATA struct and create an immutable texture with the data directly in it. That also allows you to provide your data in a standard array instead of having to create and map a staging texture.
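For the cube map case that might look roughly like this (a sketch: faceSize, facePixels and facePitch stand in for your generated data, and error handling is omitted):

C++ code:
// Sketch: create an immutable cube map directly from CPU-generated pixel data.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = faceSize;
desc.Height           = faceSize;
desc.MipLevels        = 1;
desc.ArraySize        = 6;                          // six cube faces
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_IMMUTABLE;      // no CPU access needed afterwards
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags        = D3D11_RESOURCE_MISC_TEXTURECUBE;

D3D11_SUBRESOURCE_DATA init[6] = {};
for (int i = 0; i < 6; ++i) {
    init[i].pSysMem     = facePixels[i];   // pointer to face i's pixels
    init[i].SysMemPitch = facePitch;       // bytes per row
}

ID3D11Texture2D* pCubeTex = nullptr;
HRESULT hr = pDevice->CreateTexture2D(&desc, init, &pCubeTex);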

Paradoxish
Dec 19, 2003

Will you stop going crazy in there?

High Protein posted:

By copying your generated texture to an immutable resource, do you mean a call like CopyResource()? Because afaik that doesn't accept immutable resources as the target. It works with 'default' resources though.

Yeah, this was just a brain fart. I meant a resource without any cpu access flags, not specifically a resource with the immutable usage.

quote:

A more direct option might be to use a CreateTexture2D with a D3D11_SUBRESOURCE_DATA struct and create an immutable texture with the data directly in it. That also allows you to provide your data in a standard array instead of having to create and map a staging texture.

But this just makes me feel like an idiot. :downs:

I knew the way I was handling this seemed way too convoluted. For some reason I was thinking the array of D3D11_SUBRESOURCE_DATA structs passed into CreateTexture2D was for depth levels in a 3d texture and that it wasn't possible to initialize an array of 2d textures that way. I completely missed the fact that the struct has a SysMemSlicePitch member for 3d textures. Transitioning from D3D9 to D3D11 is really loving with my head, I think. Anyway, thanks!

Jewel
May 2, 2009

A shader question that's less of a real problem I've had and more of a mind exercise for me, but like, say I wanted to make a shader that renders an entire surface as a variable amount of cells/large pixels.

What would be the best way to go about that? The only way I can really think of is: if the cells were 5x5 pixels large, then for each 5x5 cell of pixels, average the colors together and store the result as a pixel of a new surface of size surface_width/cell_width by surface_height/cell_height.

After that scale that surface back up with nearest neighbor interpolation to the original surface size?

The reason I'm storing them as a pixel of a new surface is that I can't think of another way to do it? It won't really have the data needed to set the top left pixel of a cell to a color until it's reached the bottom right pixel of a cell and averaged out all the colors there.

Will this method work at all? Is this efficient or is there a way better way to do this? I'm not googling or looking at what other people have done, as this is more of a mind exercise for me. I've also never actually made a shader, but I do know the gist of how they work.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I don't quite get what you want. Do you want some kind of mosaic/pixelated effect? As a post-processing effect of some other rendering?

The standard way to do that sort of thing is to render to an offscreen framebuffer in a small size with bilinear or bicubic scaling, and upscale back to the original size with nearest neighbor scaling, effectively using the GPU's special-purpose code to do the averages instead of you.

Jewel
May 2, 2009

Suspicious Dish posted:

I don't quite get what you want. Do you want some kind of mosaic/pixelated effect? As a post-processing effect of some other rendering?

The standard way to do that sort of thing is to render to an offscreen framebuffer in a small size with bilinear or bicubic scaling, and upscale back to the original size with nearest neighbor scaling, effectively using the GPU's special-purpose code to do the averages instead of you.

Oh! Interesting. Explain that a little more? I haven't gotten into the actual programming of directX/openGL yet but I am in a few months. Mostly confused about rendering to a small size framebuffer and having it average for you? Just a little hard to wrap my head around is all!

Boz0r
Sep 7, 2006
The Rocketship in action.
Thanks for the help, I got everything working properly now.

In our next assignment we have to pick something to do for ourselves. I thought this lighting reflection refraction stuff was cool, so I thought I'd do more of that. Does anyone have any suggestions for some different cool techniques that are an OK challenge for a beginner/intermediate ray tracing course?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Jewel posted:

Oh! Interesting. Explain that a little more? I haven't gotten into the actual programming of directX/openGL yet but I am in a few months. Mostly confused about rendering to a small size framebuffer and having it average for you? Just a little hard to wrap my head around is all!

Well, it depends on where/when/how this effect will be used.

Jewel
May 2, 2009

Suspicious Dish posted:

Well, it depends on where/when/how this effect will be used.

I'm thinking like, for now just a whole screen effect. Everything on the screen goes mosaic. Useful for interesting stuff, like instead of the screen blurring when you got hit it could go mosaic for a few moments. Or I could draw an explosion but mosaic only that explosion surface. Cool stuff like that, I guess. So stuff that moves realtime and would have to be updated every frame.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
OK, I only know OpenGL, not Direct3D, but in order to do this sort of stuff, you have to render into an off-screen framebuffer (using glBindFramebuffer and friends), and then integrate that back into the scene by drawing it as a texture. You can add a shader when you draw that texture, in which case you'd simply do the sampling with the texture function.

The trick I was talking about before was doing the averaging on the GPU. As a quick demonstration: Open Photoshop, and go to Image -> Image Size, and resize to 25% with the "Bilinear" resample mode selected, and then resize 400% with the "Nearest Neighbor" resample mode selected.

You can render your scene into a smaller texture-backed FBO with a projection matrix that makes everything smaller, with GL_TEXTURE_MIN_FILTER set to GL_LINEAR. After that, you should have a nicely averaged scene that you can scale up with GL_TEXTURE_MAG_FILTER set to GL_NEAREST. This replicates what we did in Photoshop above.

I don't believe you can do all of the resizing with GLSL alone, unfortunately.
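To make the FBO part a little more concrete, a rough setup sketch (GL 3.x-style calls, no error checking; screenW/screenH and the actual draw calls are placeholders):

C++ code:
// Sketch: a quarter-resolution texture-backed FBO that gets drawn back to the
// screen with nearest-neighbor magnification for the blocky look.
GLuint smallTex, fbo;
glGenTextures(1, &smallTex);
glBindTexture(GL_TEXTURE_2D, smallTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, screenW / 4, screenH / 4, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, smallTex, 0);

// 1) With fbo bound, set glViewport(0, 0, screenW / 4, screenH / 4) and draw the scene.
// 2) Bind the default framebuffer, restore the full viewport, and draw a fullscreen
//    quad textured with smallTex; the GL_NEAREST magnification gives the mosaic.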

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Can anyone suggest a good book covering modern-ish (shader-based) OpenGL programming? It looks like things have changed quite a lot between the release of 3.0 and now so I'm a bit hesitant to just blindly grab the top seller off Amazon.

Thanks!

Xerophyte
Mar 17, 2008

This space intentionally left blank

Suspicious Dish posted:

You can render your scene into a smaller texture-backed FBO with a projection matrix that makes everything smaller, with GL_TEXTURE_MIN_FILTER set to GL_LINEAR. After that, you should have a nicely averaged scene that you can scale up with GL_TEXTURE_MAG_FILTER set to GL_NEAREST. This replicates what we did in Photoshop above.

There's absolutely nothing wrong with this, and if you're only doing a mosaic filter it's a good idea. However, if you already happen to be rendering your scene to some intermediate buffers in order to do other post-processing effects like tonemapping, bloom and whatever else you might feel like, then it might be easier to just include your mosaic in that pipeline. It's especially relevant if you want to do some PP effects before applying the mosaic (bloom, for instance) and some after.

In GLSL the resultant fragment shader would look roughly like:
C++ code:
// FBO texture that you rendered the scene to, possibly with some post-processing done, in the same resolution as the buffer you intend to display
uniform sampler2DRect frameBuffer;

// width, height of your mosaic blocks
uniform vec2 blockSize;

// output color to write to some other framebuffer
out vec4 fragmentColor;

// Assign a single coordinate to each kernel
vec2 mosaicCoord( vec2 inCoord ) 
{
    // Can add 0.5f*blockSize if you want the center sample, but be careful at edges
    return inCoord - mod(inCoord, blockSize); 
}

void main() 
{
    // Use the mosaic lookup to get the same source color for a number of FragCoords
    vec4 framebufferColor = texture(frameBuffer, mosaicCoord(gl_FragCoord.xy));

    // ... and then do whatever other post-processing you feel like in this step
    fragmentColor = somePostProcessingOperators( framebufferColor );
}
If you want a better approximation of the average color of a mosaic block than just a single arbitrary value in it then you can either take some more samples in the shader and use the average of those or mipmap the framebuffer texture and sample at some higher level (requires DX10+ or support for the ARB_texture_non_power_of_two extension in GL, I think) to improve it, but that sort of thing is probably not worth the effort.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Yeah, I considered both abusing mipmapping and the textureLod function, and punting on the average by just taking one sample from the block, but neither seemed acceptable to me. Especially considering that automatic mipmapping of a screen-sized FBO would be pretty costly on the GPU, because it's not just a bilinear downsampling.

If the effect is transient, like the screen transitions in Super Mario World, punting on the averages and only taking one sample per block is probably the easiest way to do it, since that's how Super Mario World did the transitions as well.

PDP-1 posted:

Can anyone suggest a good book covering modern-ish (shader-based) OpenGL programming? It looks like things have changed quite a lot between the release of 3.0 and now so I'm a bit hesitant to just blindly grab the top seller off Amazon.

Thanks!

Look for books on GLES 2.0, as that's a modern subset of GL.

unixbeard
Dec 29, 2004

I haven't really worked with making normals before, and was wondering if there is a way I can calculate them so they are always facing "out"?

I am using GL_TRIANGLES, and my shapes are kinda wacky, e.g. a corkscrew:



The red lines are the normals, so sometimes they are facing out (like on the leftmost one) and sometimes in (like the rightmost one).

If I have 4 vertices making 2 triangles, the way I am currently calculating them is like this

code:
            ofVec3f p1, p2, p3, p4;
            ofVec3f n1, n2, norm;

            p1 = points[iv][iu];
            p2 = points[(iv + 1)][iu];
            p3 = points[iv][(iu + 1)];
            p4 = points[(iv + 1)][(iu + 1)];
            
            n1 = p2 - p1;
            n2 = p3 - p1;
            
            norm = n1.cross(n2).normalize();
            
            this->addVertex(p1);
            this->addNormal(norm);
            this->addVertex(p2);
            this->addNormal(norm);
            this->addVertex(p3);
            this->addNormal(norm);
            
            n1 = p2 - p3;
            n2 = p4 - p3;
            norm = n1.cross(n2).normalize();
            
            this->addVertex(p3);
            this->addNormal(norm);
            this->addVertex(p2);
            this->addNormal(norm);
            this->addVertex(p4);
            this->addNormal(norm);
Where "this" is a mesh container (specifically ofMesh), and points is a 2d array of points that has been filled in by the geometry function

code:
ofVec3f
Corkscrew(float u, float v) {
    float x = cos(u) * cos(v);
    float y = sin(u) * cos(v);
    float z = sin(v) + u;
    
    return ofVec3f(x, y, z);
}
and u/v range from -PI to PI

unixbeard
Dec 29, 2004

Actually I'm not sure if this is even possible with the way I have things, cause "out" will be relative to some other face. If I define out as "away from (0, 0, 0)" it would not be correct for something like a torus.

[edit] I still think there is something wrong with the way I am calculating them though. They will be correct for a sphere but facing the wrong way for a torus.

unixbeard fucked around with this message at 11:58 on Mar 10, 2013

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Is there a proper (or any) way to drop the upper mipmaps from a texture in OpenGL? That is, I want to be able to drop the more detailed mipmap levels of textures that are only visible far away and stream them back in if they become visible again.

I know GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL define the valid range of mipmaps, but does changing them cause mipmaps outside of the defined range to be discarded, or are they required to be preserved in the event that the range changes back?

Safe and Secure!
Jun 14, 2008

OFFICIAL SA THREAD RUINER
SPRING 2013

unixbeard posted:

I haven't really worked with making normals before, and was wondering if there is a way I can calculate them so they are always facing "out"?

Doesn't OpenGL do some stuff to decide which is the "front" face by taking into account the order (clockwise or CCW?) of the vertices that make up a triangle? Is there some way you can use that information to choose your

code:
n1 = p2 - p1;
n2 = p3 - p1;
Such that their cross-product will point in the "front" direction?

Xerophyte
Mar 17, 2008

This space intentionally left blank

unixbeard posted:

Actually I'm not sure if this is even possible with the way I have things, cause "out" will be relative to some other face. If I define out as "away from (0, 0, 0)" it would not be correct for something like a torus.

[edit] I still think there is something wrong with the way I am calculating them though. They will be correct for a sphere but facing the wrong way for a torus.

"Out" is a pretty nebulous concept for a triangle. What you're doing now will always generate a face normal pointing away from the clockwise face of the triangle, which is fine if your triangles are all clockwise face front. The default setting for glFrontFace is actually GL_CCW, but you can change that to GL_CW so no worries.

The problem is more that your parameterization of the surface generates both clockwise and counter-clockwise faces on the outside of the surface, which is probably not a good idea. I'm guessing that happens when sign(u*v) is negative or somesuch and you can fix by changing to return ofVec3f(x, z, y); for an opposite-oriented triangle in the problematic case. I'm really not sure on the condition, though.

E: Err. What I meant was that one solution is to swizzle the points when generating triangles that are incorrectly oriented but since I'm an idiot I wrote the coordinates instead. Swizzling the coordinates is obviously a really bad idea.

Xerophyte fucked around with this message at 18:13 on Mar 15, 2013

Max Facetime
Apr 18, 2009

OneEightHundred posted:

Is there a proper (or any) way to drop the upper mipmaps from a texture in OpenGL? That is, I want to be able to drop the more detailed mipmap levels of textures that are only visible far away and stream them back in if they become visible again.

I know GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL define the valid range of mipmaps, but does changing them cause mipmaps outside of the defined range to be discarded, or are they required to be preserved in the event that the range changes back?

Yes, the storage has to remain, as there are lots of other ways it could still be accessed, in addition to changing the texture level parameters back. The driver could swap it out to main memory on its own, but that wouldn't buy you anything, because OpenGL doesn't give the driver any way to signal you if it later needed to swap the data back in from main memory but there weren't enough resources available to do so.

I think there are two ways to do this. A cleaner way would be to recreate the texture with fewer mipmap levels allocated. glCopyImageSubData() can be used to speed this up.

A dirtier way would be to change the size of the allocated texture to [0,0] for the levels you don't need with a call like glTexImage2D(target, level, internalFormat, 0, 0, 0, format, type, NULL);

As long as the texture remains "complete" you should be OK, but I don't know what texture completeness means precisely. Here's where I'm getting this: http://www.opengl.org/wiki/Texture_Storage#Mutable_storage
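A rough sketch of the cleaner approach, assuming a GL 4.3-ish context (or ARB_texture_storage plus ARB_copy_image); oldTex, totalLevels, keepFrom and the base dimensions are placeholders, and clamping the shifted sizes to a minimum of 1 is omitted:

C++ code:
// Sketch: rebuild the texture without its most detailed levels, copying the kept ones.
// keepFrom is the first mip level of oldTex we want to preserve.
GLuint newTex;
glGenTextures(1, &newTex);
glBindTexture(GL_TEXTURE_2D, newTex);

int levelsKept = totalLevels - keepFrom;
glTexStorage2D(GL_TEXTURE_2D, levelsKept, GL_RGBA8,
               baseWidth >> keepFrom, baseHeight >> keepFrom);

for (int i = 0; i < levelsKept; ++i) {
    glCopyImageSubData(oldTex, GL_TEXTURE_2D, keepFrom + i, 0, 0, 0,
                       newTex, GL_TEXTURE_2D, i,            0, 0, 0,
                       baseWidth >> (keepFrom + i), baseHeight >> (keepFrom + i), 1);
}

glDeleteTextures(1, &oldTex);   // the detailed levels are now actually released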

unixbeard
Dec 29, 2004

Thank you both. Xero I think you were right about it having CW and CCW triangles. If I color each vertex/face using this http://gamedev.stackexchange.com/questions/30537/how-to-determine-counter-clockwise-vertex-winding some of them certainly are the other way. Which is lame, I thought winding might be an issue so I checked, but only checked the first triangle not the whole mesh. I still don't know how you picked that up.

I still end up with the normals inside for some shapes and outside for other shapes. If I have a sphere they are on the outside, but if I have a torus they are inside and the lighting is flipped. What I am doing is a port of a processing library, and I ripped the way it auto-calculates the normals from processing and still get the same thing. I don't know what is going on. It's not really a big thing for me, I'll just flip the normals for the ones I know need it. It's irritating on a personal level that I couldn't figure out what is going on but oh well.

Here are some pictures though, the torus has normals but they are inside. The lighting is correct (coming from the right) inside the torus, which is why the left side of the torus is lit. The green triangles are the ones that appeared to be CW.


Xerophyte
Mar 17, 2008

This space intentionally left blank

unixbeard posted:

Thank you both. Xero I think you were right about it having CW and CCW triangles. If I color each vertex/face using this http://gamedev.stackexchange.com/qu...-vertex-winding some of them certainly are the other way. Which is lame, I thought winding might be an issue so I checked, but only checked the first triangle not the whole mesh. I still don't know how you picked that up.

I still end up with the normals inside for some shapes and outside for other shapes. If I have a sphere they are on the outside, but if I have a torus they are inside and the lighting is flipped. What I am doing is a port of a processing library, and I ripped the way it auto-calculates the normals from processing and still get the same thing. I don't know what is going on. It's not really a big thing for me, I'll just flip the normals for the ones I know need it. It's irritating on a personal level that I couldn't figure out what is going on but oh well.

I realize this wasn't really a question, but anyhow: there's not that much to figure out, I think. There's no way to algorithmically deduce which choice of surface normal is the correct or intuitive one for "out" on an arbitrary surface. There are surfaces that are non-orientable and can't be assigned an "out" and "in" direction -- Möbius strips, Klein bottles and anything topologically equivalent. I'm guessing that the Processing library assumes or checks that the surface has an orientable topology, then picks an orientation more or less at random and makes all the normals agree with that. Maybe there's a clever heuristic that'll get the result the user wants more often than not, but there's just no way for it to determine which orientation would be most desirable for the arbitrary case -- that's up to whoever tessellates the faces and normals in the first place.

It can be possible to get around a mismatched vertex winding by specifying the actual normal at each vertex in some clever way rather than using the face normal, assuming you want or at least don't mind smooth shading of the surface. For instance, given a sphere with midpoint c and a parametrization of its surface p(u,v) you can calculate the normal at p as n(u,v) = normalize(p(u,v)-c). That'll be correct regardless of how and in what order you tessellate the sphere. The general way of getting a parametrization of the normal at p(u,v) is from the partial derivatives as n(u,v) = normalize(cross( dp(u,v)/du, dp(u,v)/dv ) ) which I think will be consistent iff. the parametrization is regular everywhere on the surface. This is less useful if you want per-face shading or want backface culling to work.
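For the corkscrew specifically, that might look like the sketch below (derivatives done by hand, so worth double-checking; it gives smooth per-vertex normals that are consistent regardless of triangle winding):

C++ code:
// Sketch: analytic normal for Corkscrew(u, v) = (cos(u)cos(v), sin(u)cos(v), sin(v) + u)
// via the cross product of the partial derivatives.
ofVec3f
CorkscrewNormal(float u, float v) {
    ofVec3f du(-sin(u) * cos(v),  cos(u) * cos(v), 1.0f);   // dp/du
    ofVec3f dv(-cos(u) * sin(v), -sin(u) * sin(v), cos(v)); // dp/dv
    return du.cross(dv).normalize();
}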

Basically, specifying the tessellation in such a way that the vertex winding is consistent and agrees with the surface orientation you're ultimately after makes life much easier.

unixbeard
Dec 29, 2004

Hey thanks man, I appreciate the discussion.

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm trying to do path tracing in my ray tracer and I'm supposed to use Monte Carlo integration to determine the directions of my random vectors. I don't really get Monte Carlo integration, or how I should use it in my program. Could someone explain it to me as though I was retarded?

Xerophyte
Mar 17, 2008

This space intentionally left blank

Boz0r posted:

I'm trying to do path tracing in my ray tracer and I'm supposed to use Monte Carlo integration to determine the directions of my random vectors. I don't really get Monte Carlo integration, or how I should use it in my program. Could someone explain it to me as though I was retarded?

For some posts I really wish that [latex] was a thing we could do. Oh well. I'm not sure if this is "as though you were retarded" but I'll take a stab at it.

First of all Monte Carlo integration does not do anything to determine the random directions of your vectors. You can actually do that almost however you'd like, but I'm getting ahead of myself. Monte Carlo integration as a principle is fairly simple. Say you want to integrate some R -> R function f(x) on the interval [0,1], but you don't actually know much about f. Instead the only thing you can do with f is calculate values. One way to approximate the integral (assuming some nice properties like f is bounded) is to guess that the function doesn't change much and take a uniformly distributed random value x0 in [0,1], calculate f(x0) and then consider that to be representative of the entire interval. If you do that repeatedly and keep a running average then that will approach the actual value of the integral of f over [0,1]:

(1/N) * (f(x_1) + f(x_2) + ... + f(x_N))  ->  integral over [0,1] of f(x) dx,   as N -> infinity

If you have a larger range, say [0,k], then you can still do the same thing, but you need to multiply the weight of each sample f(xi) by the length of the range k to compensate for the fact that the samples are now that much more sparsely spaced. This idea of compensating for sparseness can be made more general. I've said uniformly distributed samples xi, but Monte Carlo integration works for any distribution that actually covers the interval -- you just need to weight by the inverse of the probability density. For a uniform distribution on [0,k] the pdf happens to be 1/k everywhere, but in general you can formulate it as

integral of f(x) dx  ≈  (1/N) * sum_i f(x_i) / pdf(x_i)

where the integration is over the range of whatever distribution you're sampling. There's also no requirement that f is R->R.

How does it apply to ray tracing? Well, in ray tracing whenever you follow a ray to some intersection point p you're going to be interested in the outgoing light Lo (an RGB value, typically) from p along that ray. Doing that involves solving Kajiya's rendering equation

Lo(p, ωo)  =  Le(p, ωo) + integral over the hemisphere of fs(p, ωi, ωo) * Li(p, ωi) * dot(ωi, n) dωi

where Le is emitted light, fs the BSDF of your material and Li calculates incoming light from a certain direction. The integral in the rendering equation is generally not solvable analytically -- but what we can do is select a random direction ωi and sample the value of the integrand in that direction and then guess that this is representative of the entire thing. This allows us to estimate the value of the integral by Monte Carlo integration. Whenever we follow a ray to an intersection and need to evaluate the rendering equation to determine the incoming radiance, we randomly select one outgoing direction and evaluate the incoming radiance in that direction by sending a ray. This strategy means that when a primary ray sent from the camera intersects something we always send out one new ray from the point of intersection. The sequence of rays forms a single path in space, hence path tracing. Evaluating the Kajiya expression simplifies to

Lo(p, ωo)  ≈  Le(p, ωo) + fs(p, ωi, ωo) * Li(p, ωi) * dot(ωi, n) / pdf(ωi)

for one randomly chosen direction ωi. To calculate one sample for one pixel you cast a ray and traverse its random path. During the traversal you track the current path weight as the product of all the fs/pdf terms you're coming across, accumulating light contributions from emissions as you encounter them. You generally stop the path traversal when the path weight is below some threshold or you've reached some recursion depth you think is sufficiently deep. Keep accumulating contributions for each pixel and the path tracer will slowly converge to the correct color value for that pixel.

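As a very rough sketch of that loop, reusing the Vector3/Ray/HitInfo names from the ray tracer earlier in the thread (sampleDirection, pdf, bsdf, emission and maxComponent are hypothetical stand-ins for whatever your material and math interfaces end up being, and componentwise Vector3 operators are assumed):

C++ code:
// Sketch: one path-traced sample for a single pixel, iterative form.
Vector3 pathTraceSample(const Scene& scene, Ray ray, int maxDepth)
{
    Vector3 radiance(0.0f);   // accumulated emitted light picked up along the path
    Vector3 weight(1.0f);     // running product of the fs * dot(wi, n) / pdf terms

    for (int depth = 0; depth < maxDepth; ++depth) {
        HitInfo hit;
        if (!scene.trace(hit, ray, 0.001f, 100.0f))
            break;                                    // could add an environment lookup here

        radiance += weight * hit.material->emission();

        Vector3 wi = hit.material->sampleDirection(hit, -ray.d);  // random outgoing direction
        float   p  = hit.material->pdf(hit, -ray.d, wi);
        Vector3 fs = hit.material->bsdf(hit, -ray.d, wi);

        weight *= fs * dot(hit.N, wi) / p;
        if (maxComponent(weight) < 0.001f)            // path contributes too little: stop
            break;

        ray = Ray(hit.P, wi);
    }
    return radiance;
}
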
Things to note:
- The "best" way to deal with light sources in a path tracer in the sense that it's the most like physical reality is to let your materials have an emission property and let your environment (where non-intersecting rays end up) be represented by some function that takes an outgoing ray direction and returns a radiance value. If you want to support abstract point/area lights and directional lights you can still send shadow rays towards them at each intersection along the path and add their contributions to your accumulated value.
- It very much matters how you randomly select your outgoing directions, as they are weighted less the more likely they are. Constructing a uniform sampling over a sphere or hemisphere isn't entirely trivial. Sampling in a "good" way can simplify the calculation, for instance using something called cosine weighted hemisphere sampling results in pdf(ωi) = dot(ωi,n), which causes those two terms to cancel out. That said, it can be a good idea to skip the term cancellations and hold onto the separate values, as they can be useful for other optimizations down the line, notably multiple importance sampling.
- How you select samples also matters for convergence, as you want to sample more in the directions where there is more light contribution. Knowing what directions are more contributing is non-trivial since exactly how the radiance distribution looks is what we're doing all this sampling to find out...

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Xerophyte posted:

Constructing a uniform sampling over a sphere or hemisphere isn't entirely trivial.
Uniform spherical (for hemispherical, just flip it if the dot product of the direction and the hemisphere plane is negative):
code:
inline Math::FVec3 Randomizer::RandomDirectionHigh(Float32 distance)
{
    Float64 d = RandomDouble() * 2.0 * Math::Pi_k;
    Float64 s = sin(d) * Float64(distance);
    Float64 c = cos(d) * Float64(distance);

    Float64 u = RandomDouble() * 2.0 - 1.0;
    Float64 n = sqrt(1.0 - u*u);

    return Math::FVec3(Float32(n * c), Float32(n * s), Float32(Float64(distance) * u));
}
Preweighted with cosine distribution (can be simplified into a generate/rotate but whatever):
code:
inline Math::FVec3 Randomizer::RandomDirectionLambert(const Math::FVec3 &normal)
{
    Math::FVec3 side(normal[2], -normal[0], normal[1]);
    side = (side - normal * side.DotProduct(normal)).Normalize2();
    Math::FVec3 side2 = side.Cross(normal);

    Float32 rf = RandomFloat();
    Float32 u = sqrtf(rf);          // Square root = rate of photons at N = proportional to N
    Float32 n = sqrtf(1.0f - rf);

    Math::Angle<Float32> theta(2.0f * Float32(Math::Pi_k) * RandomFloatNot1());

    return side * (n * theta.Cos()) +
        side2 * (n * theta.Sin()) +
        normal * u;
}
One other critical thing to keep in mind is what you're sampling and what kind of artifacts you'll get from it. If you use random directions for every sampling point, then you'll get noise artifacts, and if you use the same directions for every sampling point, then you'll get banding artifacts. Both of those can substantially increase the number of iterations it'll take to converge on a result that looks OK. Noise artifacts are less visible with high-frequency results, banding artifacts are less visible with low-frequency results.

Most algorithms that use Monte Carlo sampling have some other way to distribute the error out that makes it less noticeable, i.e. the reason photon mapping has separate shoot and gather phases is that either by itself would produce a very noisy result with localized discontinuities, but doing both causes discontinuities to be distributed to the point that they're not noticeable.

OneEightHundred fucked around with this message at 20:36 on Mar 19, 2013

Schmerm
Sep 1, 2000
College Slice
I am trying to reduce the number of texture bind operations in OpenGL. This is something that's encouraged, yes? First, I'm already using a texture atlas. Woo hoo.

At some point, this atlas texture is gonna get too big and I'm going to have to start dumping textures into a second/third/fourth atlas and switching back and forth between them with BindTexture. Boo hoo.

But modern video cards have this wonderful plethora of texture image units, each with their own texture binding! What if I bind each atlas to its own texture unit, and thus never have to call BindTexture ever again? Now, BindTexture calls are effectively replaced with lots and lots of glUniform calls to tell my fragment shader's sampler2D objects which texture unit to sample from (I am using GL 2.1 and GLSL 1.20, by the way).

Soo.. how expensive are these calls, relatively speaking? Also, can I be lazy and not sort my geometry by atlas#, thus calling glUniform many many times per frame?
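For reference, the setup being described is roughly this (GL 2.1-era calls; atlasTex, atlasCount, program, atlasIndexForThisBatch and the uniform name "uAtlas" are all placeholders):

C++ code:
// Sketch: bind each atlas to its own texture image unit once...
for (int i = 0; i < atlasCount; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, atlasTex[i]);
}

// ...then per batch, only a sampler uniform changes to select the unit to read from.
glUseProgram(program);
GLint loc = glGetUniformLocation(program, "uAtlas");
glUniform1i(loc, atlasIndexForThisBatch);   // this is the call whose cost is being asked about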

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Profile it. Modern implementations of APIs like OpenGL lie to us all over the place to make stuff go fast. I can make an educated guess, but my knowledge is a generation or two out of date. It's possible that it's a giant difference, or not a difference at all.

Intel, NVIDIA and AMD all provide excellent profiling tools for their hardware. They're there so you don't have to blindly guess.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Schmerm posted:

Now, BindTexture calls are effectively replaced with lots and lots of glUniform calls to tell my fragment shader's sampler2D objects which texture unit to sample from (I am using GL 2.1 and GLSL 1.20, by the way).
Seconding calls to profile it, but I highly doubt that it'll make a difference, because the driver probably doesn't care that much which way you used to tell it to fetch from a different texture; it cares that you changed which texture you're using.

Generally, changing anything about what resources data is retrieved from and how they're used (i.e. shaders) is pretty expensive. Spamming out draw calls from the same bound resources while not changing anything is cheap (in D3D10 and OpenGL at least, D3D9 not so much).

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Well, multitexturing has to work somehow, so there are different texture units on the card itself.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Suspicious Dish posted:

Well, multitexturing has to work somehow, so there are different texture units on the card itself.
The separation of samplers and texture units isn't because it's cheaper to switch texture units than to switch textures; it's because the texture units can contain state related to accessing the texture that isn't part of the texture itself (i.e. via the glTexEnv settings). That matters more in D3D (which has clamp/mipmap stuff in the samplers) than OpenGL, but it's the same reasoning.

Rebinding samplers is still changing how the GPU is going to access the texture data though, so it's probably going to incur similar costs.
