Brownie
Jul 21, 2007
The Croatian Sensation

Volguus posted:

I have a little problem and I'm wondering if there's a better solution than what I came up with:

I am displaying images on a rectangular surface. I calculate the aspect ratio of the image and fit it into the surface as best I can without distorting it. All good. Now, the user is able to draw rectangles on the image (with the mouse). The rectangles are drawn in screen resolution, but the coordinates are then transformed into image resolution. When the image and the rectangles are rendered, the rectangles are drawn onto the image directly in image coordinates and then the entire picture is resized for the screen.
Because of these transformations it is inevitable that I will suffer some rounding errors here and there. The best results I got were by applying "std::round" to every value:
code:
    #include <cmath>  // std::round

    double scaledX = std::round(x * sx);
    double scaledY = std::round(y * sy);
    // scale the rectangle's far edge, then convert back to a width/height
    double scaledWidth = std::round((width + 1) * sx) - 1;
    double scaledHeight = std::round((height + 1) * sy) - 1;
where sx was the scale on the X axis and sy was the scale on the Y axis.

Surely this is not a new thing. Is there a better way of going about it, or is this the best I can do at the moment?

One idea is to just calculate the transformation matrix required to take your image into (u, v) space the way that you want, and then find the inverse of that matrix. Then it becomes pretty trivial to map (x, y) in image space to (u, v) and vice versa. Not sure how you reduce floating-point errors, but you shouldn't need to round anything?
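Something like this, as a minimal sketch (the struct and function names are just for illustration; for a fit-without-distortion layout the real transform is a uniform scale plus an offset, but a pure scale shows the idea):

code:
    // Hypothetical sketch: forward and inverse mapping between image
    // space and screen space for a pure scale transform.
    struct Vec2 { double x, y; };
    struct Scale { double sx, sy; };

    Vec2 imageToScreen(Vec2 p, Scale s) { return { p.x * s.sx, p.y * s.sy }; }
    Vec2 screenToImage(Vec2 p, Scale s) { return { p.x / s.sx, p.y / s.sy }; }

As long as you only round once, at the very end when you actually rasterize, the forward and inverse maps stay consistent with each other.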


Brownie
Jul 21, 2007
The Croatian Sensation
Similarly, I'm trying to implement a deferred renderer using light volumes and am currently scratching my head a bit at how to reconstruct world position in the actual shading pass.

I'm using the stencil buffer to determine which pixels are within the light volume (a sphere for point lights) as described in http://ogldev.atspace.co.uk/www/tutorial37/tutorial37.html, https://www.3dgep.com/forward-plus/#Deferred_Shading and others. I'm using BGFX (a cross-platform rendering lib), so I had to adjust the approach a bit: the marked stencil bits get reset when fragments fail the stencil test in the second draw. I have to do that because BGFX has no `glClear` equivalent, so this is the easiest way to reset the bits. (Why not? No idea tbh)

Basically the procedure is the following (for each light) -- there's a rough GL-style sketch of this state setup right after the list:
1) Draw the light volume with no culling, using separate stencil tests for the front and back faces to mark the fragments we aren't interested in shading (by incrementing). There are no writes to the RGB or depth buffers at this point.
2) Use the stencil to draw the light volume again, this time with front-face culling. Shade the fragments that pass the stencil test (i.e. weren't marked in the first draw) using the g-buffers attached as textures and additive blending.
2a) As part of the stencil test, replace the stencil value with zero on a passing depth test (which is ALWAYS for this draw). This gets around the need to call `glClear` like I've seen most posts suggest.
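For concreteness, here's roughly what that state looks like in raw OpenGL terms (a sketch, not my actual BGFX code; it assumes a bound FBO with a D24S8 depth-stencil attachment, and `drawLightVolume()` is a stand-in for the draw call):

code:
    // Pass 1: mark (increment) pixels outside the volume. No color/depth writes.
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDisable(GL_CULL_FACE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    // front face fails depth -> scene geometry is in front of the volume
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR, GL_KEEP);
    // back face passes depth -> scene geometry is beyond the volume
    glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_INCR);
    drawLightVolume();

    // Pass 2: shade unmarked pixels and reset the stencil as we go.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthFunc(GL_ALWAYS);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    // REPLACE with ref 0 on every outcome clears the marked pixels for the
    // next light without needing an explicit stencil clear
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    drawLightVolume();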

I encounter two problems, one less annoying than the other.

The first problem is that when I perform the first draw, I noticed that if a fragment's stencil value should be updated by both the front AND back stencil tests, I only see a value of 1 (corresponding to only one increment). That's okay for my purposes (since once either test passes, the pixel is marked), but I just wanted to confirm that I shouldn't expect both tests to mutate the stencil buffer.

The second problem is that when I perform the second pass and go to shade the fragment, I'm not able to read from the depth buffer, since it's part of the currently bound FBO and is used for the stencil test. So I don't have a way of reconstructing world/view position inside my lighting shader. I'm not storing positions in my g-buffers, so this is annoying. For the moment I'm storing the depth twice: in a D24S8 buffer and in an R32F buffer that I can bind as a texture for the second pass. But this seems kind of wrong? And most of the blog posts I've read seem to totally gloss over this point, so I feel like I'm missing something obvious or am totally doing this all wrong.

Any pointers or clarification would be greatly appreciated!

Brownie
Jul 21, 2007
The Croatian Sensation

Absurd Alhazred posted:

Are you sure you've disabled backface culling, too?

Yeah, using RenderDoc I can see that the Cull Mode is set to NONE. If it were BACK or FRONT I'd also see artifacts in the final rendered image, but I don't, and manual inspection of the stencil shows that all the pixels I expect to be marked are marked -- it's just that some of them are only marked once instead of twice like I expect.

(As an aside: thank the lord for tools like RenderDoc)

Brownie
Jul 21, 2007
The Croatian Sensation

Absurd Alhazred posted:

I've not had a lot of experience with stencil tests, there's also something where you have to ask it to perform the test on the backface, right? Is that set?

Yeah, that's correct. You can pass it two different stencil functions, one for the front faces and one for the back faces:
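In BGFX it's the two-argument form of `bgfx::setStencil`; the marking pass looks something like this (I'm writing the flag names from memory, so treat them as approximate):

code:
    // front faces: increment when the depth test fails;
    // back faces: increment when the depth test passes
    bgfx::setStencil(
        BGFX_STENCIL_TEST_ALWAYS
            | BGFX_STENCIL_FUNC_REF(0) | BGFX_STENCIL_FUNC_RMASK(0xff)
            | BGFX_STENCIL_OP_FAIL_S_KEEP
            | BGFX_STENCIL_OP_FAIL_Z_INCR
            | BGFX_STENCIL_OP_PASS_Z_KEEP,
        BGFX_STENCIL_TEST_ALWAYS
            | BGFX_STENCIL_FUNC_REF(0) | BGFX_STENCIL_FUNC_RMASK(0xff)
            | BGFX_STENCIL_OP_FAIL_S_KEEP
            | BGFX_STENCIL_OP_FAIL_Z_KEEP
            | BGFX_STENCIL_OP_PASS_Z_INCR);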



Here's the depth-test overlay from RenderDoc for one of the light volumes. It shows the test passing/failing as green/red, but notably only for the front faces. The column that's in front of the volume gets marked correctly:



Here's the resulting stencil buffer, though, which does show that the back-face tests are also working (evidenced by the parts of the balcony beyond the volume being marked):



But I'm surprised that the parts of the column that overlap with the openings in the balcony beyond are not marked twice (which would show up as white, since the image is rescaled so that [0, 2] maps to [0, 255]).

Like I said, this isn't actually causing me issues with the actual shading, since the stencil test output is good enough -- being marked twice isn't any more useful than being marked just once, in my case.


Brownie
Jul 21, 2007
The Croatian Sensation

Lime posted:

The portion of the light volume's back faces that are behind the column fail the depth test, just like the front faces behind the column did. But for back faces, the fail op is keep now, so that's why the stencil buffer isn't incremented twice there.

Indeed, the stencil buffer will never be incremented twice as configured because that would require a front face of the light volume to fail the depth test at the same place a back face passes: i.e., to have a front face be behind a back face, which for a convex volume is impossible (barring precision issues of course).


:ughh: that makes perfect sense actually. Thanks for clearing that up for me.

Now I just need to understand how people are reconstructing position in the second draw if they're also binding the depth-stencil buffer as part of the framebuffer. I'll just keep duplicating the data for now, since I'm not currently bandwidth-limited in my lovely little Sponza scene.
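(For reference, the reconstruction itself is just the inverse view-projection applied to NDC. A sketch of the math using glm, assuming GL-style [0, 1] depth; with D3D-style depth the z remap changes:)

code:
    #include <glm/glm.hpp>

    // Rebuild world-space position from a sampled depth value.
    // uv = fragment position in [0,1]^2, depth = depth-buffer sample in [0,1],
    // invViewProj = glm::inverse(projection * view).
    glm::vec3 worldFromDepth(glm::vec2 uv, float depth, const glm::mat4& invViewProj)
    {
        glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);
        glm::vec4 world = invViewProj * ndc;
        return glm::vec3(world) / world.w; // perspective divide
    }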

Brownie
Jul 21, 2007
The Croatian Sensation

Lime posted:

Is the second pass actually using the depth buffer? It sounds like the shading of fragments is determined entirely by the stencil buffer / g-buffers, and that updating the stencil buffer doesn't need it either because the depth func is ALWAYS. Can't you just disable depth testing and use a framebuffer object that has the same stencil buffer attachment but no depth buffer attached? Then you can freely use the depth texture as a texture. I've never used BGFX but it does seem to have a D0S8 texture format, suggesting a framebuffer can have separate attachments for depth and stencil buffers.

The second draw is not using the depth buffer and is already set to ALWAYS. It just uses the stencil test, and also writes to the stencil buffer in order to "clear" the marked pixels on failure, so that the next light volume doesn't see garbage in the stencil buffer. So it still needs write access to the packed buffer.

Unfortunately I don't think BGFX has an API for attaching only the stencil portion of a packed depth-stencil buffer to the framebuffer. And it's still not clear that this would allow me to read from that packed buffer in my shader? Everything I've read says this is basically undefined behaviour in OpenGL / DX11. Doing it anyway (setting the depth-stencil as a texture while it's also bound to the active FBO) just yields a black screen and a poo poo ton of warnings that you aren't allowed to do that, which is exactly what you'd expect/want.

Additionally, in OpenGL at least, implementations are not required to support attaching separate buffers for the depth and stencil, so in practice it's basically not supported.
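(That is, the spec lets you write this, but the driver is allowed to refuse the combination when you validate the framebuffer:)

code:
    // separate depth and stencil attachments are spec-legal to request...
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
                           GL_TEXTURE_2D, stencilTex, 0);
    // ...but the implementation may reject them here with
    // GL_FRAMEBUFFER_UNSUPPORTED
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);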

quote:

Rendering the final light volume requires depth testing.

Well no, not quite, since I've got everything I need in that stencil buffer I posted above. So I'm actually not performing depth testing on the second draw that performs the shading.

Brownie
Jul 21, 2007
The Croatian Sensation

Lime posted:

Ah, okay. This is what I meant rather than aliasing one buffer, and I fell right into the trap of thinking what the API says is logically possible is also what's physically possible.

Yup. It's surprising: considering how common this light volume technique seems, you'd think there'd be real demand for being able to just separate the two buffers entirely.

Brownie
Jul 21, 2007
The Croatian Sensation
Similarly, I'm trying to learn some D3D12, and man, the API is weird. Coming from OpenGL and Vulkan I've found the DXGI pattern really kind of bizarre. For example: there are four different numbered IDXGIAdapter interfaces: IDXGIAdapter1, IDXGIAdapter2, IDXGIAdapter3, and IDXGIAdapter4. The docs have basically no explanation of why you'd use one or the other, just that some were released in newer DXGI versions. Looking at Microsoft's D3D12 examples, they use IDXGIAdapter1... why? No idea.

Does anyone have any good resources on how to reason about API quirks like this? It's obvious to me that it has to do with the D3D ecosystem being around for so long, but I don't understand whether there are parts of this I should be ignoring or what.
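For what it's worth, here's the pattern I've been cargo-culting from the samples (my understanding, not gospel: each IDXGIAdapterN extends the previous one, so you enumerate with the oldest version that has what you need and QueryInterface up for newer methods):

code:
    #include <dxgi1_6.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software (WARP) adapter

        // Newer functionality lives on the higher-numbered interfaces;
        // QueryInterface tells you whether the runtime actually has it.
        ComPtr<IDXGIAdapter3> adapter3;
        if (SUCCEEDED(adapter.As(&adapter3)))
        {
            // e.g. adapter3->QueryVideoMemoryInfo(...) is available here
        }
        break; // take the first hardware adapter
    }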

Brownie
Jul 21, 2007
The Croatian Sensation
I'm trying to work with some skinned FBX models exported from Maya, and I noticed that the vertex normal data matches the bind pose while the position data does not. What is the reasoning behind this? Do I just have to apply the bind shape matrix to the positions (and not the normals) before using the model the way I expect? Or am I missing something completely?
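(For context, this is how I understand standard linear-blend skinning with an FBX-style bind shape matrix -- a rough sketch with glm, names mine:)

code:
    #include <glm/glm.hpp>

    // boneMatrices[i] = boneWorldTransform * inverseBindMatrix, per bone.
    // The bind shape matrix moves the raw mesh into bind-pose space first.
    glm::vec3 skinPosition(const glm::vec3& p, const glm::mat4& bindShape,
                           const glm::mat4 boneMatrices[4], const glm::vec4& weights)
    {
        glm::vec4 bindPos = bindShape * glm::vec4(p, 1.0f);
        glm::vec4 result(0.0f);
        for (int i = 0; i < 4; ++i)
            result += weights[i] * (boneMatrices[i] * bindPos);
        return glm::vec3(result);
    }

So my question is really whether that bindShape step should apply to the positions only.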

Brownie
Jul 21, 2007
The Croatian Sensation
It'd make a lot more sense if everything were pre-transformed into the bind pose, or nothing were, but that's not what I'm seeing. Unfortunately I'm working with the model in WebGL, so I'm relying on Three.js's loader and don't really have access to the SDK. But loading the model into Blender shows the same thing: the mesh's vertex positions and normals don't match when ignoring the skeleton + pose, and when using the skeleton and default pose, the mesh has visibly incorrect normals (because the same transformation is effectively applied twice).

I'm not sure if both Blender and Three.js are doing this wrong or if the export isn't being performed properly. I've never used Maya or handled FBX files before, so I'm a bit lost as to how to validate whether the model data is incorrect or whether it's the loaders' implementations.

Brownie
Jul 21, 2007
The Croatian Sensation

Suspicious Dish posted:

Don't load FBXs on the client. It's not meant to be a redistributable format. Write your own exporter using the SDK. Community reimplementations can be wrong.

glTF is very poor as an intermediate format. But definitely load it on the client if you can.

I really wish I had any say in whether we allow FBX files, but unfortunately it's part of our "offering" that users can upload and use FBX files to populate their scenes.

I might just use the FBX SDK to dump the mesh data to an OBJ file and use that to validate my suspicion that the mesh is being exported incorrectly.


Brownie
Jul 21, 2007
The Croatian Sensation

Suspicious Dish posted:

Let me ask you a different question: if you have a skinned mesh (i.e. more than one bone influence per vertex), what space could it be in other than bind-pose space? You could store it arbitrarily in the space of one of the bones, but bind-pose space would be easier and more efficient.

I'm not sure what space it could be in, *other* than bind-pose. Obviously, for normals, if the bone is just translation, then there's no difference between bind-pose and non-bind-pose transforms.

Yeah, you're right, so it looks like my assumption was wrong. I got another export from the client, and it looks like the normals match the model's pose in the first frame of animation! So the positions are correctly in bind-pose space, but the normals aren't (the reverse of what I believed earlier). The earlier model had an animation that started in a T-pose that was slightly different from the bind pose, which is why I was so confused.
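(Related note to self: normals transform by the inverse-transpose of the bone matrix, which is why a translation-only difference between poses wouldn't have shown the mismatch -- a rough sketch:)

code:
    #include <glm/glm.hpp>

    // Normals use the inverse-transpose of the upper 3x3. For a pure
    // rotation that's the rotation itself, and a pure translation
    // doesn't touch normals at all.
    glm::vec3 transformNormal(const glm::vec3& n, const glm::mat4& bone)
    {
        glm::mat3 normalMat = glm::transpose(glm::inverse(glm::mat3(bone)));
        return glm::normalize(normalMat * n);
    }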
