Unormal
Question for the gurus: I'm working on writing a little engine using deferred shading. I've got it rendering to a g-buffer of 3 FP16 color attachment textures + a depth texture, and I've got it rendering some crappy point lights, so that's all good.

So I'm basically new to OpenGL, and writing this from scratch, and my main question at this point is around the depth component of my g-buffer. I've got a GL_DEPTH_COMPONENT32 texture attached to depth on my g-buffer, but I don't really know how I'm 'supposed' to work with it.

1. Specifically, if I bind the texture, how do I access the value in a shader reasonably? texture2D returns a 4-component vector; do I have to composite the components manually into a single 32-bit value? Is there some easy GLSL way to access a 32-bit value from an FP texture that I'm missing?

2. What I really want to do is re-use the depth stored in my g-buffer during my lighting pass, in my destination framebuffer, since I'm rendering my light volumes as little cubes. If I could enable depth testing, I could early-out those pixels without running the fragment shader. I'd rather not do a new depth-only render for my framebuffer; that seems wasteful when I already have a depth texture generated for my g-buffer.

I'm not sure what the 'right' way (if there is one) is to re-use the depth texture from my g-buffer as the depth attachment of my framebuffer (or some other FBO). Could I use a full-screen quad and a fragment shader to load depth values from my g-buffer's depth texture into my framebuffer somehow? I also store the fragment z in my g-buffer; could I manually load the depth from that, or is the fragment depth implementation-dependent?

I'll figure it out eventually, but maybe you guys could reduce the head-bashing. Appreciate it!

(Last time I wrote graphics code was in the 90s, writing a software rasterizer in the days before hardware acceleration; poo poo sure is fast now, no joke! You driver authors/hardware guys are great, I don't have to do any of the hard work anymore. :))

e: Minor question: Does noperspective not work on ATI cards or something? I'm getting texture coordinates for sampling my g-buffer by doing an (xy/w) for the vertices of my light cubes. I was getting distortion due to perspective-correct interpolation to the fragments, but noperspective fixed that on my NVidia card. The distortion still happens on an ATI card someone tested on, even though it doesn't throw an error building the shaders with noperspective in them:

noperspective out vec2 vOutTexturePos;

Unormal fucked around with this message at 16:43 on Feb 14, 2011


Unormal

OneEightHundred posted:

Don't divide by W to undo perspective correction, multiply by it.

Well, but the code below works perfectly, except for very minor distortion in vOutTexturePos caused by perspective-correct interpolation of those values to the fragment shader. On an NVidia card, where vOutTexturePos interpolation respects "noperspective", it works perfectly: vOutTexturePos is correctly the x,y coordinate of the g-buffer texel I want to sample for any point represented in world space by vVertex (mMVP is my model-view-projection matrix). On the ATI card I tested on, I get very minor distortion, which is reproduced exactly on an NVidia card by removing noperspective and letting it interpolate vOutTexturePos perspective-correct. So it seems the ATI driver isn't respecting noperspective in this case.

code:
noperspective out vec2 vOutTexturePos;
...
// project to clip space, then divide by w to get NDC
vOutVertex = mMVP * vVertex;
vOutTexturePos.xy = (vOutVertex.xy / vOutVertex.w);
// remap NDC [-1,1] to texture space [0,1]
vOutTexturePos.xy = (vOutTexturePos.xy * 0.5) + vec2(0.5, 0.5);
If I change it to multiply by W, it doesn't seem to work at all. It's certainly possible I'm not understanding something here. :)
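e: For posterity, if ATI keeps ignoring noperspective, the fallback I'll probably try is to skip the interpolated varying entirely and derive the g-buffer coordinate in the fragment shader from the window-space position. Untested sketch; vScreenSize would be a new uniform holding the viewport size in pixels:

code:
// fragment-shader fallback: no varying to mis-interpolate at all.
// vScreenSize = viewport (width, height) in pixels, set by the app.
uniform vec2 vScreenSize;
...
vec2 vTexPos = gl_FragCoord.xy / vScreenSize; // window coords -> [0,1]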

Unormal

OneEightHundred posted:

Well, I only have limited experience with deferred so take it with a grain of salt, but I'm pretty sure deferred stuff is usually done with screen-space quads anyway, or even full-screen quads with scissoring, and 3D stuff is just used to apply stencil buffer values.

Sure, billboarding the light volumes instead of using cubes, or scissoring, would be a workaround that should work fine on both cards, since a billboard has no perspective to correct for; cubes seem like a perfectly reasonable implementation method if ATI would just respect noperspective, though :) It seems to me (newb that I am) that rendering depth-tested cubes would let me skip the fragment shader entirely, instead of running it on the light billboard just to discard the fragment because it's 'behind' a pixel. It seems way better to let the depth test do it.

However, if I do more extensive post-processing, like blending in transparent objects, it'd be nice to be able to re-use my depth buffer. So, 3D light volumes aside, if anyone has answers to the depth-buffer questions, I'm all ears :)

Unormal fucked around with this message at 02:12 on Feb 15, 2011

Unormal

OneEightHundred posted:

I believe what is normally done is using stencil volumes, i.e. render the back of a sphere or cube with stencil on increment, then render the front with decrement, and then draw with stencil test on things above 0.

This has the additional effect of only shading pixels lit by the light, as opposed to doing it in 3D, which shades any pixel where the line of sight hits the light volume before hitting a solid (even though it may have exited the light volume and hit a solid behind it), and it's basically required for doing directional projections.

Ah right, that makes sense!

Pretty new to stencil volumes, but was just reading about shadowing using stencil volumes tonight. Seems to be a pretty reasonable approach.

Unormal

OneEightHundred posted:

I hosed up that description because I haven't done it in a while.

Disable color/depth write, set depth test to only render behind the target pixels, set stencil to increment, draw the back of the light cube/sphere, set stencil to decrement, draw the front of it, then re-enable color write and draw your lighting shader in screen space with stencil test set to only draw for stencil values above zero.

If you're good with screen-space partitioning (i.e. you can prevent lights from overlapping on screen) then you can do this with multiple lights at once.

e: Actually you can do this with stencil XOR too, since you're not dealing with intersected volumes.

Yeah, your initial description was enough to get my light bulb to go off. I've got it rendering a couple thousand totally dynamic lights in a big outdoor scene at the moment, so I'll approach it pretty generically rather than doing preliminary culling. Though I guess I could throw them all in an octree or something; I dunno how much CPU that would use as they move. (I don't have a good instinctual feeling for how much poo poo CPUs and GPUs can do in graphical scenes these days, but it's *a lot*, and the CPU seems to be the :downs: step-child of a modern GPU.) I'll probably just use a much-simplified shader on far-away lights to keep the fill-rate issues down.

e: Though thinking about it, I need the depth for the stencil volume rendering as described here, so my initial question of whether I can re-use my g-buffer depth texture in some intelligent way still stands. Or do I just need to do a fast depth-only pass on my final target framebuffer first? (That seems wasteful.) [Also, since my volumes are simple cubes, at least for my point lights, XOR stenciling seems like it'd work fine.]
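e2: For my own notes, my reading of that pass sequence as GL calls. Untested sketch: I go through Tao in practice but the names map 1:1, DrawLightCube/DrawLightQuad are stand-ins for my own helpers, and it assumes the scene depth is already in the bound FBO (hence the depth-reuse question):

code:
// pass 1: mark pixels inside the light volume, writing only stencil
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);                 // pass only *behind* the scene pixels
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, ~0u);

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                    // back faces: increment on depth pass
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
DrawLightCube();

glCullFace(GL_BACK);                     // front faces: decrement on depth pass
glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
DrawLightCube();

// pass 2: run the lighting shader only where stencil != 0
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_DEPTH_TEST);
glStencilFunc(GL_NOTEQUAL, 0, ~0u);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
DrawLightQuad();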

Unormal fucked around with this message at 03:02 on Feb 15, 2011

Unormal

OneEightHundred posted:

As for depth buffer reuse, a pretty common approach with deferred rendering is using the depth buffer, combined with the projection matrix and screen-space coordinates, to determine what the world-space coordinates are for a pixel without explicitly storing it.


Right, that's the plan; I'm just curious how I'm 'supposed' to be doing the depth buffer extraction. I tried for a half hour or so after I got my dorky implementation working by storing it explicitly, but couldn't figure out an easy way, so I'll just go bash on it till I figure it out. :)

Thanks for your help, I'll tinker some more and see how it goes. The conversation at least shows me I'm tracking on a path that makes overall sense; I just have to hammer out the details.

Unormal
Figured I'd post some of my results for posterity as I figure them out.

So for depth value sampling, just attaching the depth component as a texture and using the r channel of the sampler2D worked fine.

So you can calculate and display a greyscale linear depth with code like this:

code:
float n = 0.01; // camera z near
float f = 512.0; // camera z far
float v = texture2D(tFrameDepth, vTexCoord.xy).r;
// linearize the hyperbolic depth value into [0,1]
float c = (2.0 * n) / (f + n - v * (f - n));
vFragColor = vec4(c, c, c, 0); // linear depth as greyscale
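And the end goal, reconstructing the world-space position from that same depth sample, should be something like this. Untested sketch; mInvViewProj would be a new uniform holding the inverse of my view-projection matrix:

code:
uniform mat4 mInvViewProj; // inverse(projection * view), supplied by the app
...
float z = texture2D(tFrameDepth, vTexCoord.xy).r;
// texcoord and depth from [0,1] back to NDC [-1,1]; w maps 1.0 -> 1.0
vec4 vClip = vec4(vTexCoord.xy, z, 1.0) * 2.0 - 1.0;
vec4 vHomog = mInvViewProj * vClip;
vec3 vWorldPos = vHomog.xyz / vHomog.w; // undo the perspective divide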
I've found lots of references to detaching the depth texture and attaching it to a new FBO, saying that it should work, but I haven't gotten it working yet. That's probably next on the docket.

E: So it seems the depth attachment works fine; I was just trying to attach it to my "final" window framebuffer, and that's apparently a no-no. As long as I create an intermediate framebuffer to do my blending in, I can detach the depth texture from my geometry FBO and attach it to my compositing/lighting FBO for the lighting stage, and it works fine.

code:
// unbind the depth texture from the current FBO
Gl.glFramebufferTexture2DEXT(Gl.GL_FRAMEBUFFER_EXT, Gl.GL_DEPTH_ATTACHMENT_EXT, Gl.GL_TEXTURE_2D, 0, 0);
// Bind the compositing buffer
LightingBuffer.Bind(); 
// unbind anything bound to the lighting FBO
Gl.glFramebufferTexture2DEXT(Gl.GL_FRAMEBUFFER_EXT, Gl.GL_DEPTH_ATTACHMENT_EXT, Gl.GL_TEXTURE_2D, 0, 0);
// bind the main buffer's depth to the lighting depth attachment, voilà
Gl.glFramebufferTexture2DEXT(Gl.GL_FRAMEBUFFER_EXT, Gl.GL_DEPTH_ATTACHMENT_EXT, Gl.GL_TEXTURE_2D, MainBuffer.fboDepth, 0);

Unormal fucked around with this message at 17:54 on Feb 16, 2011

Unormal

Spite posted:

FBO 0 isn't actually an object - it's the system drawable. So attaching things to it may have...odd effects (though it should just throw an error).

One quick note about CPU overhead: OpenGL is pretty bad about CPU overhead as well. Using 3.1+ will mitigate this, as they removed a bunch of junk. But anything earlier (ie, anything that still has fixed function, etc) will require validation of all that legacy state which sucks rear end.

Am I using anything fixed-function here? I figured since I'm entirely shader-driven I was bypassing the 'fixed' pipeline, though I don't really know how using the framebuffer EXT functions vs. the built-in framebuffer functions from 3.x would affect things. I guess I figured driver writers would just implement the EXT functions as special cases of the more general 3.0 functionality, and EXT would actually be more portable, even though the OpenGL pages tell you to use the more updated built-in functions if you can.

The only thing that feels 'built-in/fixed' to me is using the OpenGL blend mode to render each deferred light into the intermediary buffer. That feels a little more auto-magic than the rest of the rendering I do, which is much more manual-direct-writes via shaders. Though I guess there's a lot of magic going on under there anyway. I can't figure out any way I could do the blending manually other than having 2 FBOs and swapping them each time I render a new light, which seems ridiculous, and I can't imagine it would be speedier than just using the built-in blend, though I haven't actually benchmarked it.
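For reference, the blend state I mean is just additive accumulation into the lighting FBO (raw GL names here; my code goes through Tao but it's the same calls):

code:
// accumulate each light: dst = dst + src
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
// ...one draw per light volume / screen-space quad...
glDisable(GL_BLEND);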

E: Is there any kind of good comprehensive guide to mainstream video cards and their capabilities in terms of OpenGL? (e.g. how many render targets they support, how many texture units, etc.)

E2: VV Nice thanks!

Unormal fucked around with this message at 02:17 on Feb 17, 2011

Unormal

octoroon posted:

So I'm trying to render the geometry in my VBOs with as few draw calls as possible. Most of the geometry shares texture coordinates but uses different textures. So I was thinking I could just sew all the textures together into one big texture and alter the texture coordinates accordingly so that I wouldn't have to bind new textures very often.

The problem I'm seeing is that generating mipmaps for one big texture causes artifacts when parts of the texture are used for different pieces of geometry. Is there any way to get around that?

Would it be better to just suck it up and bind a lot of textures?

The simplest way is to put "padding" around each of the individual textures. In some cases I'll extend the outermost pixels of each texture out a little so the mipmap filtering gathers copies of the edge instead of the neighboring texture. It's very inelegant, but it works. I'm pretty sure you can also generate the mipmap levels manually, but I haven't actually tried it yet.
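Roughly like this, for the padding (untested sketch; assumes tightly packed RGBA8 tiles in memory, and the names are made up):

code:
#include <stdint.h>

// Copy a (w x h) RGBA8 tile into the atlas at (dx, dy), surrounded by a
// pad-pixel border repeating the tile's outermost pixels, so mipmap
// filtering averages in clones of the edge instead of the neighboring tile.
// The caller must leave pad pixels of room around each tile.
static void BlitTilePadded(uint32_t *atlas, int atlasW,
                           const uint32_t *tile, int w, int h,
                           int dx, int dy, int pad)
{
    for (int y = -pad; y < h + pad; y++) {
        for (int x = -pad; x < w + pad; x++) {
            // clamp source coords to the tile edge
            int sx = x < 0 ? 0 : (x >= w ? w - 1 : x);
            int sy = y < 0 ? 0 : (y >= h ? h - 1 : y);
            atlas[(dy + y) * atlasW + (dx + x)] = tile[sy * w + sx];
        }
    }
}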

Unormal

roomforthetuna posted:

Another thing that might help with the mipmap problem of a texture atlas would be to have sub-textures (or at least boundaries) that are reasonably high powers of 2 in size - if your boundaries are on the 16 pixel line then at least the 1/2, 1/4, 1/8 and 1/16 scaled versions won't have any bleed from the next subtexture over. (And at the 1/32 level I doubt it really matters that much.)

I have not tried this so I could be wrong.

Even with 128x128 or 256x256 square textures, I get mipmap artifacts if the textures are directly adjacent in an atlas with the automatic OpenGL mipmap generation on an NVidia card.

Unormal

octoroon posted:

Thanks! This really helped, I've got my atlases working rather nicely with mipmapping now.

What'd you end up doing?

Unormal

brian posted:

Roger dodger thanks chaps, onto Z bufferin'!

It's been a long time since I wrote a software renderer, but one of the big costs back in the day was clearing the z-buffer every frame.

A dumb trick that used to really speed it up:

Use only a "chunk" of your z-buffer's range each frame: with half the resolution, add a 0.5 bias to every write on alternating frames and only clear every other frame; or with a quarter of the resolution, step the bias by 0.25 each frame and clear every 4 frames, for example.

You lost some resolution, but it would make z-buffering go from "not fast enough for real-time" to "totally fast enough", so it was worth it back in the mode 13 320x200 days.
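From memory, the shape of it was something like this (sketch only; a float z-buffer and a less-than test for clarity, with the bias stepping nearer each frame so stale values always lose):

code:
#define W 320
#define H 200
static float zbuf[W * H];

// Call once per frame; returns the bias to add to every depth write.
// Each frame uses a successively nearer quarter of the range, so values
// left over from the previous three frames always fail a less-than test;
// a real clear only happens every 4th frame.
static float BeginFrameZ(int frame)
{
    int slot = frame & 3;                  // 0..3, repeating
    if (slot == 0)
        for (int i = 0; i < W * H; i++)
            zbuf[i] = 1e30f;               // the only real clear
    return 0.25f * (3 - slot);             // 0.75, 0.5, 0.25, 0.0
}

// per pixel, with z in [0,1):
//   float d = bias + 0.25f * z;
//   if (d < zbuf[i]) { zbuf[i] = d; /* plot pixel */ }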

Though I'm not sure if clearing the z-buffer is a significant hit these days.

Unormal

ShinAli posted:

I don't know if I asked before in this thread but I'll go ahead.

How would I go about making multiple lights in my phong lighting shader? What I've done is have a uniform array of fixed size (say 100) through which I pass lighting information like position/direction/size/type, and loop through the array about 100 times. From this thread I've heard that a variable loop is pretty bad, so I kept it fixed at 100 and put in an if statement to see if the current element is enabled. If there are more than 100 lights, I just render the scene again with the lights it didn't go through and blend it with the previously rendered scene.

I'm not sure if this is the right way, and if you guys want I'll put up the source code. It has some if statements in it anyways and I'm not sure how well shaders handle branching.

I'd also like to know how to handle attenuation of spot lights, as everywhere I've looked they seem to use fixed values. I assumed I'd just linearly reduce the light with distance, but went with the fixed attenuation values.

If you can give up blended transparency, look into 'deferred shading'; it's complicated, but oh so good for lots of lights. (And there are ways to get transparency back if you really want it.)

Unormal

ShinAli posted:

That's exactly what I've been using, and I seem to be able to go to about a 1000 lights before it slows down below 30 fps. I just don't know if I'm doing it right.

Generally if you're using a deferred shader, you shouldn't be looping over a light list in a single shader; you should be rendering a single quad (or sphere or whatever) per light volume.
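i.e., something shaped like this per light (untested sketch; the sampler and uniform names are placeholders for whatever your g-buffer layout actually is, and vTexPos/vWorldPos come from the screen position and depth reconstruction discussed above):

code:
uniform sampler2D tColor;     // g-buffer albedo
uniform sampler2D tNormal;    // g-buffer world-space normal, packed [0,1]
uniform vec3 vLightPosWorld;
uniform vec3 vLightColor;
uniform float fLightRadius;
...
vec3 vAlbedo = texture2D(tColor, vTexPos).rgb;
vec3 vNormal = normalize(texture2D(tNormal, vTexPos).xyz * 2.0 - 1.0);
vec3 vToLight = vLightPosWorld - vWorldPos;
float fDist = length(vToLight);
float fAtten = max(1.0 - fDist / fLightRadius, 0.0); // simple linear falloff
float fNdotL = max(dot(vNormal, vToLight / fDist), 0.0);
vFragColor = vec4(vAlbedo * vLightColor * fNdotL * fAtten, 1.0);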

Unormal

roomforthetuna posted:

Aha, thank you! I had never understood what the w coordinate was about, and that explains it perfectly.

Oh, except wait, if the thing distorting the screen-space result is Output.Position.w, then why does changing Output.Position.z after transforms were applied still change the size?

For a standard perspective projection, .w is just the view-space depth copied into the fourth component by the projection matrix, and .z is a remapped version of that same depth, so the two track the same underlying value; w is the one the hardware actually divides by. (afaik)

Unormal fucked around with this message at 16:05 on Sep 9, 2011

Unormal

FlyingDodo posted:

I'm not sure if this should go in general programming or here. I am trying to make an OpenGL renderer in C++ and I want to make as much use of OOP as possible. This runs into some problems. For example, if I have any class that stores an OpenGL object id (any texture, vertex array object, buffer) in it and the destructor unloads it from OpenGL, then any copy of the object will have an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing, and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL into objects, so there must be a better way.

I don't think this is the best way to approach it; but if you're going to do it this way, the classic answer is a reference count on some shared external record: increment it when something takes a reference, decrement it when something releases one, and only actually release the GL resources when the reference (use) count reaches 0.
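The shape of it, as a sketch (these days std::shared_ptr with a custom deleter gets you the same behavior for free; Texture here is just illustrative):

code:
#include <GL/gl.h>

// Shared, refcounted GL texture handle: copies bump the count, and the
// GL object is only deleted when the last holder lets go.
struct TextureRef {
    GLuint id;
    int    refs;
};

class Texture {
public:
    explicit Texture(GLuint id) : ref(new TextureRef{id, 1}) {}
    Texture(const Texture &o) : ref(o.ref) { ++ref->refs; }
    Texture &operator=(const Texture &o) {
        if (ref != o.ref) { Release(); ref = o.ref; ++ref->refs; }
        return *this;
    }
    ~Texture() { Release(); }
    GLuint Id() const { return ref->id; }

private:
    void Release() {
        if (--ref->refs == 0) {
            glDeleteTextures(1, &ref->id); // actually free the GL object
            delete ref;
        }
    }
    TextureRef *ref;
};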

Unormal

AntiPseudonym posted:

I think the best solution to your problem is to not actually allow copies of the object at all, as it doesn't make sense to have more than one object in memory for the same resource. You're much better off just using smart pointers or having an object manager that controls when objects are deleted (Say on a level change).


I've seen a few people do it this way, but one thing I've never quite grasped: Why not just use a pointer to a wrapper class rather than an index into an array?

Keeping an ID rather than just a pointer seems like it's adding an extra unnecessary layer of indirection, since accessing simple information has to be done through the singleton rather than simply using a method on the object itself. Not to mention with pointers you wouldn't have to worry about any of the ID management you have to deal with at the moment. Plus using smart pointers is generally more reliable than manually updating the reference counts.

Well, if you have a bunch of actual pure pointers to object X, you can never replace object X. Imagine you want to dynamically load/unload the underlying resource, or replace it with a different level-of-detail model, for example. If you have pointers, you'd have to go through every parent object that contains the pointer and update it to the new object. If all of your parent objects just say "give me object 8 out of that array" (basically), then you can load/unload or replace object 8 any time.

There are times that direct pointers can be better, but they're not obviously better for all use cases.
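A sketch of the indirection I mean (names made up):

code:
#include <vector>
#include <GL/gl.h>

struct GpuTexture { GLuint glId; };

// Parent objects store an integer handle ("give me object 8"), never a
// pointer, so the table can swap the underlying resource at any time
// (streaming, LOD swaps, hot reload) without chasing stale pointers.
class TextureTable {
public:
    int Add(GpuTexture t) { slots.push_back(t); return (int)slots.size() - 1; }
    GpuTexture &Get(int handle) { return slots[handle]; }
    // replace in place; every holder of 'handle' sees the new texture
    void Replace(int handle, GpuTexture t) { slots[handle] = t; }
private:
    std::vector<GpuTexture> slots;
};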

Unormal

AntiPseudonym posted:

We're talking about a pointer to a wrapper, though, not a direct pointer to the D3DTexture. You could just replace the actual pointer within the wrapper and everything else will start using it, same as replacing the object in the array.

Ah, I see what you're suggesting.

I think that approach would work as well; it just seems like a less conventional approach to me. Frankly, it'd probably be a better way of thinking for vastly multi-threaded environments.

Unormal fucked around with this message at 17:36 on Sep 10, 2011

Unormal

Contero posted:

I feel like my graphics knowledge is incredibly dated, and I want to catch up in a hurry. I have decent (if a bit outdated) knowledge of OpenGL and graphics theory. I just feel like I'm missing the practical knowledge required to put together a modern and efficient graphics engine.

Where should I go? What should I be reading?

I recently went through the same exercise; after I figured a lot of it out, I ran across this book, which summed up most of the tricks I had collected from a lot of other sources:

http://www.amazon.com/OpenGL-4-0-Shading-Language-Cookbook/dp/1849514763/ref=sr_1_fkmr2_1?ie=UTF8&qid=1328669428&sr=8-1-fkmr2


Unormal

OneEightHundred posted:

Remind me what the Quadro and Fire are better at doing than the consumer cards again?

Increasing profit margins?

They used to be better at line and other base-primitive drawing, for wireframe CAD applications; I'm not sure if that's true anymore.
