|
The best answer is "a physics textbook". Understanding lighting models requires understanding both how light reflects off of objects and how geometry works.
|
# ? Sep 28, 2014 05:20 |
|
|
|
Suspicious Dish posted:The best answer is "a physics textbook". Understanding lighting models requires understanding both how light reflects off of objects and how geometry works. Bummer. Yeah, I'm having a tough time debugging my issues. For example, a vertex shader snippet: code:
code:
Edit: Hmmmm it looks like transforming everything into model (world?) space was a poo poo idea. View space seems to be the way to go. Which made my vertex shader look like: code:
and the render look like: Which doesn't look 100% ridiculous to my eyeball. Tres Burritos fucked around with this message at 01:38 on Sep 29, 2014 |
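For reference, a minimal sketch of the view-space approach described above; the uniform and attribute names here are illustrative, not the actual shader from the post. code:
#version 330
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;

uniform mat4 uModelView;    // model space -> view space
uniform mat4 uProjection;   // view space -> clip space
uniform mat3 uNormalMatrix; // transpose(inverse(mat3(uModelView)))

out vec3 vViewPos;    // fragment position, view space
out vec3 vViewNormal; // normal, view space

void main() {
    vec4 viewPos = uModelView * vec4(aPosition, 1.0);
    vViewPos     = viewPos.xyz;
    vViewNormal  = normalize(uNormalMatrix * aNormal);
    gl_Position  = uProjection * viewPos;
}
// Light positions have to be supplied in view space too (multiplied by the
// view matrix) so the lighting math happens in one consistent frame.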
# ? Sep 28, 2014 19:44 |
Can anyone here explain what's happening or can you point me to a resource on the subject? Oh well, gently caress me. I just realised that the GPU draws triangles, and not entire geometric shapes all at once, so it gets three normals that extrude from the corner points of the triangle. Sorry about that. Joda fucked around with this message at 05:56 on Oct 6, 2014 |
|
# ? Oct 6, 2014 05:37 |
|
This is probably a bit of a weird question, but does anyone know how to disable perspective correction on textures in OpenGL, or at least emulate it via shaders? I'm trying to go for an authentic PS1 retro look and I think removing perspective correction would help add to the feel.
|
# ? Oct 10, 2014 03:50 |
|
AntiPseudonym posted:This is probably a bit of a weird question, but does anyone know how to disable perspective correction on textures in OpenGL, or at least emulate it via shaders? I'm trying to go for an authentic PS1 retro look and I think removing perspective correction would help add to the feel. Reading this: http://www.glprogramming.com/red/chapter09.html#name17 posted:When the four texture coordinates (s, t, r, q) are multiplied by the texture matrix, the resulting vector (s' t' r' q') is interpreted as homogeneous texture coordinates. In other words, the texture map is indexed by s'/q' and t'/q' . It seems that if you can specify Q=1 in your texture coordinates, you can get the effect you want. Edit: Hmm I think the default of Q is 1, so perhaps there's another operation going on afterwards to do the perspective correction. Edit: Or how about this? quote:If you use a vertex shader, multiply the texture coordinate by the W of the vertex position after you've applied the projection transform to it. HiriseSoftware fucked around with this message at 04:47 on Oct 10, 2014 |
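A sketch of that last suggestion in GLSL ES terms (the names are illustrative, not from the post): multiplying the UVs by the post-projection w in the vertex shader and dividing it back out in the fragment shader cancels the rasterizer's perspective-correct divide, leaving screen-space affine mapping. code:
// vertex shader
attribute vec3 aPosition;
attribute vec2 aUV;
uniform mat4 uMvp;
varying vec3 vUVw; // xy = uv * w, z = w

void main() {
    gl_Position = uMvp * vec4(aPosition, 1.0);
    vUVw = vec3(aUV * gl_Position.w, gl_Position.w);
}

// fragment shader
precision mediump float;
varying vec3 vUVw;
uniform sampler2D uTex;

void main() {
    // The hardware interpolates varyings with a 1/w correction; dividing
    // xy by the interpolated w cancels it, giving PS1-style affine UVs.
    gl_FragColor = texture2D(uTex, vUVw.xy / vUVw.z);
}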
# ? Oct 10, 2014 04:42 |
|
You can directly control that via an interpolation qualifier. noperspective is what you want.
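For reference, a sketch of how that looks in a desktop GLSL (1.30+) vertex shader; GLSL ES doesn't have noperspective, so the w-multiply trick above is the fallback there. code:
#version 330
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aUV;
uniform mat4 uMvp;
noperspective out vec2 vUV; // interpolated in screen space, no 1/w correction

void main() {
    vUV = aUV;
    gl_Position = uMvp * vec4(aPosition, 1.0);
}
// The fragment shader declares the matching input the same way:
// noperspective in vec2 vUV;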
|
# ? Oct 10, 2014 22:40 |
|
I think I've solved this as I was typing it up, but I'm going to ask anyway because I don't like my solution. I'm trying to draw some rectangular prisms and independently control their location/scale/pose/etc. This is what I've got: code:
code:
code:
It looks like I can fix this by backing out the transformations after the call to glDrawArrays: code:
Any advice is welcome. fritz fucked around with this message at 17:43 on Oct 17, 2014 |
# ? Oct 17, 2014 17:40 |
|
fritz posted:I think I've solved this as I was typing it up, but I'm going to ask anyway because I don't like my solution. No don't!! Construct your matrices on the CPU and use shaders to do the transformation. If you're using GL's stupid matrix stack, check yourself before you wreck yourself.
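Roughly what that looks like with GLM, as a sketch; position, angle, axis, size, projection, view, program and vertexCount are assumed names, not anything from fritz's code. C++ code:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the per-object matrix on the CPU...
glm::mat4 model = glm::translate(glm::mat4(1.0f), position)
                * glm::rotate(glm::mat4(1.0f), angle, axis)
                * glm::scale(glm::mat4(1.0f), size);
glm::mat4 mvp = projection * view * model;

// ...and hand it to the shader as a uniform (cache the location in real code).
glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "uMvp"), 1, GL_FALSE,
                   glm::value_ptr(mvp));
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
// vertex shader side: gl_Position = uMvp * vec4(aPosition, 1.0);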
|
# ? Oct 18, 2014 21:01 |
|
Is there any public information on what the OpenGL NG API might look like?
|
# ? Oct 18, 2014 21:39 |
|
Nothing like OpenGL, HTH. Take a look at Mantle or Metal for an idea of where 3D APIs are going.
|
# ? Oct 18, 2014 22:09 |
|
kraftwerk singles posted:Is there any public information on what the OpenGL NG API might look like? The only thing public is this: https://www.khronos.org/assets/uploads/developers/library/2014-siggraph-bof/OpenGL-Ecosystem-BOF_Aug14.pdf Starts on slide 67. There's some private info, but I can't share that yet.
|
# ? Oct 18, 2014 22:14 |
|
Malcolm XML posted:No don't!! OK, I'm now using a model/view/projection thing, and every prism has its own model matrix, so it's something like: code:
Alternatively, I could bind the model parameters to a series of uniforms and do the computation in the shader? (They're just a scale/rotation/translation of the prisms.) When I move on to the full specification of the various objects, should I just lump them all into one big contiguous section of memory on the heap (like with a std::vector<float>), bind it to the buffer once, set the MVP for each object, and call glDrawArrays with different offsets?
|
# ? Oct 19, 2014 19:24 |
|
fritz posted:OK, I'm now using a model/view/projection thing, and every prism has its own model matrix, so it's something like: Also, if you are drawing hundreds/thousands of these, you might wanna look into instanced rendering, since your geometry is the same across all draw calls.
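A sketch of what instancing might look like here (GL 3.3+; modelLoc, instanceBuffer, vertsPerPrism and buildModelMatrices() are assumed names): the prism geometry is uploaded once, and a second buffer supplies one model matrix per instance. C++ code:
#include <vector>
#include <glm/glm.hpp>

// One model matrix per prism, rebuilt as they move.
std::vector<glm::mat4> models = buildModelMatrices();

glBindBuffer(GL_ARRAY_BUFFER, instanceBuffer);
glBufferData(GL_ARRAY_BUFFER, models.size() * sizeof(glm::mat4),
             models.data(), GL_DYNAMIC_DRAW);

// A mat4 attribute occupies four consecutive vec4 attribute slots.
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(modelLoc + i);
    glVertexAttribPointer(modelLoc + i, 4, GL_FLOAT, GL_FALSE,
                          sizeof(glm::mat4),
                          (void*)(sizeof(glm::vec4) * i));
    glVertexAttribDivisor(modelLoc + i, 1); // advance once per instance
}

// One call draws every prism; the shader reads its own matrix:
//   in mat4 aModel;  gl_Position = uProj * uView * aModel * vec4(aPos, 1.0);
glDrawArraysInstanced(GL_TRIANGLES, 0, vertsPerPrism, (GLsizei)models.size());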
|
# ? Oct 19, 2014 20:00 |
|
I appear to be missing something from the sequence of things for OpenGL rendering. code:
|
# ? Oct 20, 2014 06:35 |
|
roomforthetuna posted:I appear to be missing something from the sequence of things for OpenGL rendering. I didn't ever figure out what was causing the problem there; instead I tried to introduce glDebugMessageCallback in the hope that it would give me a hint. It didn't work because I was using too low a version, one that didn't support glDebugMessageCallback, so I went to use a higher version. I changed from using GLUT to GLFW. The higher version wouldn't start. I manually forced the binary to run with the other GPU of my laptop, and now the higher version would start, and glDebugMessageCallback exists. But with those settings, nothing renders (even with the setup where something was rendering before) *and* there is no debug error message. (glClear is still working though; the background is the color I specified.) If I turn the version back down while using the newer GPU, and issue the same commands as before, I do still get the base rendering, and then there's no glDebugMessageCallback available. Sweet. OpenGL, you are amazing, and I love how I'm either supposed to use a version that one of my GPUs doesn't support or I'm using deprecated functions.
|
# ? Oct 24, 2014 15:34 |
|
Did you try putting glGetError() after EVERY GL command and see where it first fails?
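Along those lines, a common debugging helper (a sketch, not from the thread) is to wrap every call so the first failing one reports its file and line. C++ code:
#include <cstdio>

#define GL_CHECK(stmt) do {                                       \
        stmt;                                                     \
        GLenum err = glGetError();                                \
        if (err != GL_NO_ERROR)                                   \
            fprintf(stderr, "GL error 0x%04X at %s:%d: %s\n",     \
                    err, __FILE__, __LINE__, #stmt);              \
    } while (0)

// Usage:
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vbo));
GL_CHECK(glDrawArrays(GL_TRIANGLES, 0, 36));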
|
# ? Oct 24, 2014 16:12 |
|
Newer GL versions need a user-created and bound vertex array object (VAO) to store the state set by glEnableVertexAttribArray and glVertexAttribPointer. Try putting this in your initialization code: code:
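A minimal sketch of that kind of setup, assuming a GL 3.x core context. C++ code:
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// With this VAO bound, the subsequent glBindBuffer(GL_ARRAY_BUFFER, ...),
// glEnableVertexAttribArray and glVertexAttribPointer calls have somewhere
// to record their state; core profiles refuse attribute setup without one.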
The_Franz fucked around with this message at 21:15 on Oct 24, 2014 |
# ? Oct 24, 2014 21:09 |
|
HiriseSoftware posted:Did you try putting glGetError() after EVERY GL command and see where it first fails? I did at least discover why glDebugMessageCallback wasn't working - it needs both glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS) *and* glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE), and I only had one of them. The_Franz posted:Newer GL versions need a user-created and bound vertex array object ...
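For anyone following along, a sketch of that combination (GLFW plus a 4.3 debug context; the callback signature is the standard KHR_debug one, and GLAPIENTRY comes from your GL loader's headers). C++ code:
#include <cstdio>

static void GLAPIENTRY onGlDebug(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar* message, const void* user) {
    fprintf(stderr, "GL debug: %s\n", message);
}

void setupDebugOutput() {
    // The window hint has to happen *before* glfwCreateWindow:
    //   glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report inside the offending call
    glDebugMessageCallback(onGlDebug, nullptr);
}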
|
# ? Oct 25, 2014 01:04 |
|
It looks like I need to just ask this because outdated answers from the internet are no answers at all. What's the least frustrating way to use OpenGL such that it will work with minimal effort cross-platform, ideally including both mobile devices and Intel GPUs that are several years old (and regular modern hardware too of course)? I'm not looking to do anything fancier than a bunch of single textured triangles with alpha blending and render targets (for a 2D game), so I don't need any kind of advanced shader functionality. Is it going to end up being "use OpenGL ES for new hardware and write different code and different shaders to use an older version of OpenGL for old hardware", or will OpenGL ES work on older PC hardware, or...?
|
# ? Oct 26, 2014 03:48 |
|
roomforthetuna posted:It looks like I need to just ask this because outdated answers from the internet are no answers at all. WebGL? It's basically ES 2.0 I think, and now runs on the latest iOS...
|
# ? Oct 26, 2014 04:56 |
|
Tres Burritos posted:WebGL? It's basically ES 2.0 I think, and now runs on the latest iOS... I don't mind having to include a special wrapper, like Android will obviously need, to run a C++ thing. But I'm hoping I don't also have to do special shaders and special interfaces to shaders too. Edit: It's "do everything twice", isn't it. Otherwise it wouldn't make sense for Unity to have made their own intermediate shader language. roomforthetuna fucked around with this message at 18:16 on Oct 26, 2014 |
# ? Oct 26, 2014 05:13 |
roomforthetuna posted:My fault not being clear with what I'm going for - I mistakenly said something that works cross platform, what I really want is something I can compile to run natively on various platforms. I definitely don't want to need embedded web junk or javascript involved in any way. (But thanks for the answer, it was a fine answer to the question as originally asked.) I hope the assembled will forgive me for recommending this, but libGDX may be worth investigating. It won't really compile to native code, as it's Java based, but it will run on Android and on Desktops with minimal fuss. I think there's iOS stuff, too, through RoboVM. RoboVM might be able to compile native apps, too. I swear by it for almost anything mobile. https://github.com/libgdx
|
|
# ? Oct 27, 2014 00:55 |
|
In case anyone is curious or encounters a similar problem, it turns out my mistake was this: roomforthetuna posted:
|
# ? Oct 28, 2014 07:23 |
|
Right, because the vertex attributes are associated with the vertex buffer which was bound at the time of the call to glVertexAttribPointer. Each attribute has its own vertex buffer handle which it sources data from. It doesn't change when you bind another vertex buffer afterwards. I don't see any VAO (vertex array object) setup in your code. That's what the attributes and index buffer handle are stored inside. You're supposed to create your VAOs once at the beginning and then bind them when you want to draw. They control all your vertex state in just one call. The code should look roughly like this: C++ code:
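A rough reconstruction of that pattern, assuming a GL 3.x core context (names are illustrative):
// Initialization, once per mesh:
GLuint vao, vbo, ibo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexBytes, vertexData, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo); // recorded in the VAO
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indexData, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);                                // recorded too
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr); // sources vbo

glBindVertexArray(0);

// Drawing, per frame:
glUseProgram(program);
glBindVertexArray(vao); // restores attributes + index buffer in one call
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);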
Spatial fucked around with this message at 23:24 on Oct 28, 2014 |
# ? Oct 28, 2014 23:22 |
|
Spatial posted:I don't see any VAO (vertex array object) setup in your code. That's what the attributes and index buffer handle are stored inside. You're supposed to create your VAOs once at the beginning and then bind them when you want to draw. They control all your vertex state in just one call. (With a newer OpenGL target, on my other GPU, I do get a bunch of errors about not having a something or other, but I don't want to be coding for that target because the older version works on both GPUs.) Edit: no, wait, I was misunderstanding you. I do create vertex array objects; that was just pseudocode showing my render function, not the initialization. What I was missing is that the VAO stores attributes and index buffer handles - I assumed the vertex array object just stored an array of vertices, like the name implies, and that I always had to re-bind everything else before calling a render. So which things do I have to re-bind when rendering? The VAO, but not the index buffer, not the attributes, not the uniforms? How about glBindTexture - affiliated with the VAO such that I can leave it alone, or not? If I use the same shader with two different VAOs, and there is a uniform in that shader, can I set the uniform once for each VAO and then leave it alone, or is that value affiliated with the shader rather than the VAO? roomforthetuna fucked around with this message at 02:25 on Oct 29, 2014 |
# ? Oct 29, 2014 02:14 |
|
Yeah, it's pretty confusing. I made the exact same mistake at first. VAO scope is fairly limited; it's purely a vertex setup thing. All that's stored is GL_ELEMENT_ARRAY_BUFFER_BINDING and these values for each attribute:
GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING
GL_VERTEX_ATTRIB_ARRAY_ENABLED
GL_VERTEX_ATTRIB_ARRAY_SIZE
GL_VERTEX_ATTRIB_ARRAY_STRIDE
GL_VERTEX_ATTRIB_ARRAY_TYPE
GL_VERTEX_ATTRIB_ARRAY_NORMALIZED
GL_VERTEX_ATTRIB_ARRAY_INTEGER
GL_VERTEX_ATTRIB_ARRAY_DIVISOR
GL_VERTEX_ATTRIB_ARRAY_POINTER
You still have to bind shaders and textures etc. as you were doing before. Lame, I know. Also it's important to remember that it doesn't keep track of the bound vertex buffer, only the index buffer. If you're only calling drawing functions it's fine, but if you want to manipulate the vertex buffer data with glBufferSubData() or the like, you need to bind GL_ARRAY_BUFFER manually each time.
|
# ? Oct 29, 2014 02:57 |
|
Spatial posted:You still have to bind shaders and textures etc as you were doing before. Lame I know. So what's the deal with uniforms? If I'm using the same shader with a different uniform value to render two array buffers, is changing the uniform going to cause a "block until the previous operation is complete" like changing data in an array buffer would, such that I should instantiate two copies of the same shader program instead?
|
# ? Oct 29, 2014 05:53 |
|
roomforthetuna posted:So what's the deal with uniforms? If I'm using the same shader with a different uniform value to render two array buffers, is changing the uniform going to cause a "block until the previous operation is complete" like changing data in an array buffer would, such that I should instantiate two copies of the same shader program instead? Couldn't say. It seems like the sort of thing that would be easily buffered into a command stream by the driver and it's a really common use case. You would hope it would be optimised heavily.
|
# ? Nov 3, 2014 15:56 |
|
Spatial posted:Couldn't say. It seems like the sort of thing that would be easily buffered into a command stream by the driver and it's a really common use case. You would hope it would be optimised heavily. But then we're talking about OpenGL drivers so...
|
# ? Nov 4, 2014 02:14 |
|
Nothing in a GL driver ever blocks. It just copies.
|
# ? Nov 4, 2014 05:16 |
|
roomforthetuna posted:It looks like I need to just ask this because outdated answers from the internet are no answers at all. May be a bit of overkill, but you could use OpenSceneGraph... especially if you are not rolling your own shaders.
|
# ? Nov 4, 2014 05:45 |
|
Suspicious Dish posted:Nothing in a GL driver ever blocks. It just copies.
|
# ? Nov 4, 2014 06:33 |
|
Drawing calls in GL are guaranteed to behave as if they were executed serially, each one waiting until the previous has finished before performing. However, if the GPU can recognize that two calls can run in parallel without any observable effects (one render call renders to the top left of the framebuffer, the other to the bottom right), then it might schedule both at once.
|
# ? Nov 4, 2014 07:59 |
|
Suspicious Dish posted:Nothing in a GL driver ever blocks. It just copies. On a related note I fixed horrific stuttering (e.g. occasionally taking 200ms to draw a frame) in my OpenGL renderer at the weekend caused by me being an idiot and calling glFinish 4 times per frame. That certainly does block, but I now know you shouldn't ever need it.
|
# ? Nov 4, 2014 15:57 |
|
While GL calls won't necessarily block, there are several calls that can force a CPU-GPU sync and silently kill your performance. For instance, calling any of the glGet* functions can force one, so it's generally better to shadow that state yourself. Good buffer management is also key, since certain buffer-management techniques are much faster than others; the 'naive' map/unmap approach when updating buffers is another way to kill your performance, since it can cause a sync. There have been some good presentations on OpenGL buffer management and performance over the last year:
The AZDO presentation from GDC.
A similar talk given at Steam Dev Days: https://www.youtube.com/watch?v=-bCeNzgiJ8I
A writeup on modern buffer handling by AMD's OpenGL driver guy.
MarsMattel posted:On a related note I fixed horrific stuttering (e.g. occasionally taking 200ms to draw a frame) in my OpenGL renderer at the weekend caused by me being an idiot and calling glFinish 4 times per frame. That certainly does block, but I now know you shouldn't ever need it. I remember reading that some drivers just make glFinish a nop since most people don't actually use it correctly. If you want to enforce some kind of syncing these days it's better to manually set and block on fences. The_Franz fucked around with this message at 16:21 on Nov 4, 2014 |
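A sketch of the fence approach (GL 3.2+ / ARB_sync):
// Drop a fence into the command stream after submitting work:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// ...later, before touching resources the GPU may still be reading:
GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                            16000000); // timeout in nanoseconds (16 ms)
if (r == GL_TIMEOUT_EXPIRED) {
    // GPU still busy: do other work and retry rather than stalling the frame
}
glDeleteSync(fence);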
# ? Nov 4, 2014 16:16 |
|
Yeah, I certainly didn't need it. It just seemed like a reasonable thing to do (e.g. after finishing the shadow-map pass, or the first pass of a deferred renderer) to make sure those stages were complete before the following stages attempted to use their output. The calls were in for months and months before they caused problems; it's only recently, as I've started rendering, uploading and releasing a lot more data, that I started running into trouble. Which leads to something I've been wondering about: I've implemented an 'infinite' voxel terrain renderer (https://www.youtube.com/watch?v=rhlt5HwfOhY) which requires uploading and releasing data pretty much constantly. Will that always cause slowdowns, or is there a way to organise things to avoid that? I had a quick look at that post by the AMD guy and it seems persistent maps would be better than my glBufferData calls?
|
# ? Nov 4, 2014 16:57 |
|
Yeah, if you are generating a lot of dynamic geometry the current best thing to do is use a persistently mapped buffer. It is more work on the application side since you are responsible for manual memory management and fencing to make sure that you don't trample data that the GPU is currently working on, but the performance gains can be quite large. There is a nice brief overview of how to do this starting at slide 83 of the AZDO presentation.
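A sketch of the setup, assuming GL 4.4 or ARB_buffer_storage (names are illustrative):
GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT
                 | GL_MAP_COHERENT_BIT;

GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferStorage(GL_ARRAY_BUFFER, bufferSize, nullptr, flags); // immutable
void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bufferSize, flags);

// ptr stays valid while drawing: stream vertex data through it directly.
// Partition the buffer into a few regions and fence each one (glFenceSync /
// glClientWaitSync) so the CPU never overwrites data the GPU hasn't read.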
|
# ? Nov 4, 2014 17:41 |
|
Right. Thanks for the help!
|
# ? Nov 4, 2014 17:47 |
|
The way to imagine it (the OpenGL specification is written in terms of this model, and any observable deviation from it is a spec violation) is that whenever you call gl*, you're making a remote procedure call to some external rendering server. So you can batch up multiple glDrawElements calls, but whenever you query something from the server, you have to wait for everything else to finish. OpenGL is based on SGI hardware and SGI architecture from the 80s. If you're ever curious why glEnable and some other method taking a boolean argument exist as two separate calls, it's because on SGI hardware glEnable hit one register with a bit flag, and the other method hit another. glVertex3i just poked a register as well. You could serialize these over the network quite well, so why not do it?
|
# ? Nov 4, 2014 18:58 |
|
|
|
Hey fellas, I've been doing a rare foray into graphics stuff and I've written a shader for Unity that does palette-based textures using a palette-indexed picture and a palette texture. It's entirely for doing old palette-shifting effects, and it's almost definitely horribly inefficient, but it works in a limited capacity, except I had to add a constant I don't understand. Anyway, here's the shader: code:
So I was wondering why I have to add the +0.01 to the x of the tex2D call; if I have it without the +0.01 it misses one of the colours and everything is indexed wrong, and I suspect that when I make it support multiple lines of palettes per file (for power-of-2 textures and whatnot) I'll have to add the same to the y. Any help would be fab. Also, is there a way to get the dimensions of a texture/sampler, or do you really have to pass them in each time? edit: I added support for square palettes and for some reason now I need to subtract 0.01 from the y component of the tex2D call instead of adding it, but I still have to add an odd constant that's bound to go wrong when the palette texture gets big, and I don't know why. brian fucked around with this message at 11:18 on Nov 5, 2014 |
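Magic +/-0.01 offsets in palette lookups are usually a sign of sampling at texel edges instead of texel centres. A sketch of the lookup in GLSL terms (the same arithmetic applies in Unity's Cg; the names here are illustrative, not from the shader above): code:
uniform sampler2D uIndexTex; // palette indices stored in the red channel
uniform sampler2D uPalette;  // paletteW x paletteH grid of colours

vec4 paletteLookup(vec2 uv) {
    float index = texture(uIndexTex, uv).r * 255.0; // back to 0..255
    // GLSL 1.30+ can query dimensions directly; in Unity you'd pass in the
    // float4 _Palette_TexelSize property instead.
    vec2 palSize = vec2(textureSize(uPalette, 0));
    float x = mod(index, palSize.x);
    float y = floor(index / palSize.x);
    // Sample at the texel centre, (i + 0.5) / size. Sampling at i / size
    // lands on a texel boundary, where rounding picks a neighbour - the
    // usual reason ad-hoc 0.01 fudge factors creep in.
    vec2 palUV = (vec2(x, y) + 0.5) / palSize;
    return texture(uPalette, palUV);
}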
# ? Nov 4, 2014 21:15 |