Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
The best answer is "a physics textbook". Understanding lighting models requires understanding both how light reflects off of objects and how geometry works.


Tres Burritos
Sep 3, 2009

Suspicious Dish posted:

The best answer is "a physics textbook". Understanding lighting models requires understanding both how light reflects off of objects and how geometry works.

Bummer. Yeah, I'm having a tough time debugging my issues. For example:



Vertex Shader Snippet:
code:
//translate lighting info into modelspace
modelSpaceVertexNormal = normal;
modelSpaceVertexPosition = vec3(modelMatrix * vec4(heightPosition,1.0));
modelSpacePointLightPositions[0] = vec3(modelMatrix * vec4(pointLightPositions[0],1.0));
Most of the fragment shader :
code:
void main()
{
    //calculate vectors
    vec3 L = normalize(modelSpacePointLightPositions[0] - modelSpaceVertexPosition);
    vec3 N = normalize(modelSpaceVertexNormal);
    vec3 V = normalize(cameraPosition);

    //calculate lambert
    float lambertTerm = dot(N,L);

    //ambient is easiest, let's get that first
    pointAmbient = pointLightAmbients[0] * materialAmbient;

    pointDiffuse = lambertTerm * materialDiffuse * pointLightDiffuses[0];

    //calculations for specular
    vec3 R = reflect(L, N);
    float specular = pow( max(dot(R, V), 0.0), shininess);
    pointSpecular = specular * materialSpecular * pointLightSpeculars[0];

    pointIllumination = pointAmbient + pointDiffuse + pointSpecular;
    pointIllumination.a = 1.0;



    //RGBA
    //gl_FragColor = vec4(1.0,0.0,0.0,0.3);
    //gl_FragColor = pointLightSpeculars[0];
    gl_FragColor = pointIllumination;
}
I haven't the slightest idea why only half of the stuff in my scene is getting sort of lit. My best guess was that the light wasn't in the same "space". So I started playing around with locations / matrices that were going to the fragment shader and this is the result I got. Playing with the normal and "normalMatrix" also seemed to produce some funky results. I'm going off of the Phong page from Wikipedia as well as this, which seemed like a good starting place. I'm just trying to get a point light working instead of a directional light.

Edit: Hmmmm, it looks like transforming everything into model (world?) space was a poo poo idea. View space seems to be the way to go. Which made my vertex shader look like:

code:
//translate lighting info into viewspace
modelSpaceVertexNormal = normalMatrix * normal;
modelSpaceVertexPosition = vec3(modelViewMatrix * vec4(heightPosition,1.0));
modelSpacePointLightPositions[0] = vec3(viewMatrix * vec4(pointLightPositions[0],1.0));
which makes sense since the light is already in model space (durr durr).
and the render looks like:


Which doesn't look 100% ridiculous to my eyeball.
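
For anyone comparing notes, the textbook view-space version of that fragment shader looks roughly like this (a sketch reusing the variable names from the snippets above; the clamped Lambert term, the negated L fed to reflect(), and the view vector pointing from the fragment to the eye at the origin are the usual gotchas):
code:
void main()
{
    // all inputs assumed to be in the same (view) space; the eye sits at the origin
    vec3 L = normalize(modelSpacePointLightPositions[0] - modelSpaceVertexPosition);
    vec3 N = normalize(modelSpaceVertexNormal);
    vec3 V = normalize(-modelSpaceVertexPosition); // fragment -> eye, not normalize(cameraPosition)

    // clamp the Lambert term so back-facing fragments get no diffuse
    float lambertTerm = max(dot(N, L), 0.0);

    vec4 pointAmbient = pointLightAmbients[0] * materialAmbient;
    vec4 pointDiffuse = lambertTerm * materialDiffuse * pointLightDiffuses[0];

    // reflect() expects the incident vector pointing *toward* the surface, hence -L
    vec3 R = reflect(-L, N);
    float specular = pow(max(dot(R, V), 0.0), shininess);
    vec4 pointSpecular = specular * materialSpecular * pointLightSpeculars[0];

    vec4 pointIllumination = pointAmbient + pointDiffuse + pointSpecular;
    gl_FragColor = vec4(pointIllumination.rgb, 1.0);
}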

Tres Burritos fucked around with this message at 01:38 on Sep 29, 2014

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I just implemented a Phong shader, but am not really sure why it works. I calculate the vertex vector, vertex-light vector and the vertex normal in the vertex shader (the way I implemented it, everything is in eye space) and send them as out fields to the fragment shader. The thing that has me baffled is why it interpolates, say, the light vector properly across all fragments, while also interpolating the surface normal so it is uniform for all fragments across the same surface. As far as I can tell, it has no real way of telling when it's interpolating normals and when it's interpolating a position or direction vector that is not uniform across the surface, as all out fields are just plain vec4s. I even tested it by telling the fragment shader to draw the normals and the light vector respectively as the colour, which confirmed that it is actually interpolating like it should to produce the right result. I just don't understand how it knows the difference.

Can anyone here explain what's happening or can you point me to a resource on the subject?


Oh well, gently caress me. I just realised that the GPU draws triangles, and not entire geometric shapes all at once, so it gets three normals that extrude from the corner points of the triangle. Sorry about that.

Joda fucked around with this message at 05:56 on Oct 6, 2014

AntiPseudonym
Apr 1, 2007
I EAT BABIES

:dukedog:
This is probably a bit of a weird question, but does anyone know how to disable perspective correction on textures in OpenGL, or at least emulate it via shaders? I'm trying to go for an authentic PS1 retro look and I think removing perspective correction would help add to the feel.

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.

AntiPseudonym posted:

This is probably a bit of a weird question, but does anyone know how to disable perspective correction on textures in OpenGL, or at least emulate it via shaders? I'm trying to go for an authentic PS1 retro look and I think removing perspective correction would help add to the feel.

Reading this:

http://www.glprogramming.com/red/chapter09.html#name17 posted:

When the four texture coordinates (s, t, r, q) are multiplied by the texture matrix, the resulting vector (s', t', r', q') is interpreted as homogeneous texture coordinates. In other words, the texture map is indexed by s'/q' and t'/q'.

It seems that if you can specify Q=1 in your texture coordinates, you can get the effect you want.

Edit: Hmm I think the default of Q is 1, so perhaps there's another operation going on afterwards to do the perspective correction.

Edit: Or how about this?

quote:

If you use a vertex shader, multiply the texture coordinate by the W of the vertex position after you've applied the projection transform to it.
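
In shader terms, that trick would look something like this (a sketch; affineUV, mvp, position and uv are invented names). Pre-multiplying by clip-space w and dividing by the interpolated w afterwards cancels the hardware's perspective correction, leaving screen-space (affine) interpolation:
code:
// vertex shader
gl_Position = mvp * position;
affineUV = vec3(uv * gl_Position.w, gl_Position.w);

// fragment shader
vec2 finalUV = affineUV.xy / affineUV.z;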

HiriseSoftware fucked around with this message at 04:47 on Oct 10, 2014

Spatial
Nov 15, 2007

You can directly control that via an interpolation qualifier. noperspective is what you want.
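
E.g., a sketch (needs GLSL 1.30+; the qualifier has to match in both stages):
code:
// vertex shader
noperspective out vec2 uv;

// fragment shader
noperspective in vec2 uv;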

fritz
Jul 26, 2003

I think I've solved this as I was typing it up, but I'm going to ask anyway because I don't like my solution.

I'm trying to draw some rectangular prisms and independently control their location/scale/pose/etc. This is what I've got:

code:
// cube_vertices and cube_colors are just the data for a cube w/ corners from (-1,-1,-1) to (1,1,1)
void drawCube(const glm::vec3& position,
              const glm::vec3& size,
              const float angle,
              const glm::vec3& pose){
  glRotatef(angle, pose.x, pose.y, pose.z);
  glScalef(size.x, size.y, size.z);
  glTranslatef(position.x, position.y, position.z);

  /* We have a color array and a vertex array */
  glEnableClientState(GL_VERTEX_ARRAY);
  glEnableClientState(GL_COLOR_ARRAY);
  glVertexPointer(3, GL_FLOAT, 0, cube_vertices);
  glColorPointer(3, GL_FLOAT, 0, cube_colors);
  /* Send data : 24 vertices */
  glDrawArrays(GL_QUADS, 0, 24);

  /* Cleanup states */
  glDisableClientState(GL_COLOR_ARRAY);
  glDisableClientState(GL_VERTEX_ARRAY);

}
which works great except that when I call drawCube multiple times, if I apply any nonzero scaling, rotation, or translation to one cube, it persists across calls. That is,

code:
        drawCube(glm::vec3(0,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(5,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,5,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,0,5), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
draws four cubes with parallel axes centered at the first args, but if I change it to
code:
        drawCube(glm::vec3(0,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(5,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,5,0), glm::vec3(1,1,1), 45, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,0,5), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
the 3rd and 4th cubes are both rotated. So obviously I'm keeping state between calls that I don't want, but I don't know enough about gl (this is only my 3rd full day of trying to do gl by working thru online tutorials and stackoverflow...)


It looks like I can fix this by backing out the transformations after the call to glDrawArrays:
code:

  glTranslatef(-position.x, -position.y, -position.z);
  glScalef(1/size.x, 1/size.y, 1/size.z);
  glRotatef(-angle, pose.x, pose.y, pose.z);
or, as I realized belatedly, surround things with glPushMatrix and glPopMatrix. I'm reading that the push/pop stuff has been deprecated for some time, which is making me think that I'm going down a path that I might later regret. I'm expecting to eventually need a few dozen objects; for now these are going to be rectangular prisms, but pretty soon they'll have their own mesh data with anywhere from a few hundred to a few thousand triangles.

Any advice is welcomed.

fritz fucked around with this message at 17:43 on Oct 17, 2014

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

fritz posted:

I think I've solved this as I was typing it up, but I'm going to ask anyway because I don't like my solution.

I'm trying to draw some rectangular prisms and independently control their location/scale/pose/etc. This is what I've got:

code:

// cube_vertices and cube_colors are just the data for a cube w/ corners from (-1,-1,-1) to (1,1,1)
void drawCube(const glm::vec3& position,
              const glm::vec3& size,
              const float angle,
              const glm::vec3& pose){
  glRotatef(angle, pose.x, pose.y, pose.z);
  glScalef(size.x, size.y, size.z);
  glTranslatef(position.x, position.y, position.z);

  /* We have a color array and a vertex array */
  glEnableClientState(GL_VERTEX_ARRAY);
  glEnableClientState(GL_COLOR_ARRAY);
  glVertexPointer(3, GL_FLOAT, 0, cube_vertices);
  glColorPointer(3, GL_FLOAT, 0, cube_colors);
  /* Send data : 24 vertices */
  glDrawArrays(GL_QUADS, 0, 24);

  /* Cleanup states */
  glDisableClientState(GL_COLOR_ARRAY);
  glDisableClientState(GL_VERTEX_ARRAY);

}

which works great except that when I call drawCube multiple times, if I apply any nonzero scaling, rotation, or translation to one cube, it persists across calls. That is,

code:

        drawCube(glm::vec3(0,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(5,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,5,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,0,5), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
draws four cubes with parallel axes centered at the first args, but if I change it to
code:

        drawCube(glm::vec3(0,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(5,0,0), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,5,0), glm::vec3(1,1,1), 45, glm::vec3(1,0,0));
        drawCube(glm::vec3(0,0,5), glm::vec3(1,1,1), 0, glm::vec3(1,0,0));
the 3rd and 4th cubes are both rotated. So obviously I'm keeping state between calls that I don't want, but I don't know enough about gl (this is only my 3rd full day of trying to do gl by working thru online tutorials and stackoverflow...)


It looks like I can fix this by backing out the transformations after the call to glDrawArrays:
code:


  glTranslatef(-position.x, -position.y, -position.z);
  glScalef(1/size.x, 1/size.y, 1/size.z);
  glRotatef(-angle, pose.x, pose.y, pose.z);

or, as I realized belatedly, surround things with glPushMatrix and glPopMatrix. I'm reading that the push/pop stuff has been deprecated for some time, which is making me think that I'm going down a path that I might later regret. I'm expecting to eventually need a few dozen objects; for now these are going to be rectangular prisms, but pretty soon they'll have their own mesh data with anywhere from a few hundred to a few thousand triangles.

Any advice is welcomed.

No don't!!


Construct your matrices on the cpu and use shaders to do the transformation

If you're using GL's stupid matrix stack, check yourself before you wreck yourself
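
Something roughly like this (a sketch; assumes a shader with a mat4 MVP uniform, and that position, angle, axis, size, projection, view and mvpLocation are your own variables):
code:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// build the model matrix on the CPU instead of using the GL matrix stack
glm::mat4 model = glm::translate(glm::mat4(1.0f), position)
                * glm::rotate(glm::mat4(1.0f), glm::radians(angle), axis)
                * glm::scale(glm::mat4(1.0f), size);
glm::mat4 mvp = projection * view * model;

// upload to the shader; no state leaks between draw calls
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
glDrawArrays(GL_TRIANGLES, 0, 36);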

emoji
Jun 4, 2004
Is there any public information on what the OpenGL NG API might look like?

pseudorandom name
May 6, 2007

Nothing like OpenGL, HTH.

Take a look at Mantle or Metal for an idea of where 3D APIs are going.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

kraftwerk singles posted:

Is there any public information on what the OpenGL NG API might look like?

The only thing public is this:

https://www.khronos.org/assets/uploads/developers/library/2014-siggraph-bof/OpenGL-Ecosystem-BOF_Aug14.pdf

Starts on slide 67.

There's some private info, but I can't share that yet.

fritz
Jul 26, 2003

Malcolm XML posted:

No don't!!


Construct your matrices on the cpu and use shaders to do the transformation

If you're using GL's stupid matrix stack, check yourself before you wreck yourself

OK, I'm now using a model/view/projection thing, and every prism has its own model matrix, so it's something like:

code:
// bind the triangles and color info, load shaders, etc.
for (all prisms) {
    MVP = Projection * View * make_model(parameters);
    // bind the MVP uniform
    glDrawArrays(GL_TRIANGLES, 0, 36);
}
(I am also now encoding the prism as 12 triangles instead of 6 quads; glDrawArrays(GL_QUADS) doesn't seem to work.)

Alternatively I could bind the model parameters to a series of uniforms and do the computation in the shader? (they're just a scale/rotation/translation of the prisms).


When I get around to adding the full specification of the various objects, should I just lump them all into one big contiguous section of memory on the heap (like with a std::vector<float>), bind it to the buffer once, set the MVP for each object, and call glDrawArrays with different offsets?

shodanjr_gr
Nov 20, 2007

fritz posted:

OK, I'm now using a model/view/projection thing, and every prism has its own model matrix, so it's something like:

code:
// bind the triangles and color info, load shaders, etc.
for (all prisms) {
    MVP = Projection * View * make_model(parameters);
    // bind the MVP uniform
    glDrawArrays(GL_TRIANGLES, 0, 36);
}
(I am also now encoding the prism as 12 triangles instead of 6 quads; glDrawArrays(GL_QUADS) doesn't seem to work.)

Alternatively I could bind the model parameters to a series of uniforms and do the computation in the shader? (they're just a scale/rotation/translation of the prisms).


When I get around to adding the full specification of the various objects, should I just lump them all into one big contiguous section of memory on the heap (like with a std::vector<float>), bind it to the buffer once, set the MVP for each object, and call glDrawArrays with different offsets?

Also, if you are drawing hundreds/thousands of these, you might wanna look into instanced rendering, since your geometry is the same across all draw calls.
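
A rough sketch of the simplest flavour of that (assumes GL 3.3+, an mvps uniform array in the vertex shader, and a prismCount small enough to fit in the uniform limit; all names invented):
code:
// one MVP per prism, uploaded once, then a single draw call for all of them
glUniformMatrix4fv(mvpArrayLocation, prismCount, GL_FALSE, glm::value_ptr(mvps[0]));
glDrawArraysInstanced(GL_TRIANGLES, 0, 36, prismCount);

// vertex shader side (GLSL):
//   uniform mat4 mvps[64];   // hypothetical size
//   gl_Position = mvps[gl_InstanceID] * vec4(position, 1.0);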

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
I appear to be missing something from the sequence of things for OpenGL rendering.
code:
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(program);
glUniform1ui(uniform_id, 0);
glVertexAttribPointer(...);
glEnableVertexAttribArray(...);
glBlendFunc(src_blend, dst_blend);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnable(GL_BLEND);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, (void*)0);
glDisableVertexAttribArray(...);
// glBindBuffer(GL_ARRAY_BUFFER, some_other_buffer); // if I uncomment this line then the image no longer appears
glutSwapBuffers();
What am I missing that would enable me to render from more than one buffer?

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

roomforthetuna posted:

I appear to be missing something from the sequence of things for OpenGL rendering.
God drat it OpenGL.

I never did figure out what was causing the problem there; instead I tried to introduce glDebugMessageCallback in the hope that it would give me a hint. That didn't work because I was targeting a version too low to support glDebugMessageCallback, so I went to use a higher version. I changed from GLUT to GLFW. The higher version wouldn't start. I manually forced the binary to run with my laptop's other GPU, and now the higher version would start, and glDebugMessageCallback exists. But with those settings, nothing renders (even with the setup where something was rendering before) *and* there is no debug error message. (glClear is still working, though; the background is the colour I specified.)

If I turn the version back down while using the newer GPU, and issue the same commands as before, I do still get the base rendering, and then there's no glDebugMessageCallback available. Sweet. OpenGL you are amazing, and I love how I'm either supposed to use a version that one of my GPUs doesn't support or I'm using deprecated functions.

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.
Did you try putting glGetError() after EVERY GL command to see where it first fails?

The_Franz
Aug 8, 2003

Newer GL versions need a user-created and bound vertex array object (VAO) to store the state set by glEnableVertexAttribArray and glVertexAttribPointer. Try putting this in your initialization code:

code:
/* Allocate and assign a Vertex Array Object to our handle */
glGenVertexArrays(1, &vao);
 
/* Bind our Vertex Array Object as the current used object */
glBindVertexArray(vao);
EDIT: Also, take a look at GLUS as a platform abstraction library, as that seems to be the modern replacement for GLUT. GLUT is so old it's not even listed on Khronos' library page anymore.

The_Franz fucked around with this message at 21:15 on Oct 24, 2014

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

HiriseSoftware posted:

Did you try putting glGetError() after EVERY GL command and see where it first fails?
I guess that's what I'm going to have to do if I want useful error messages without going to a version of OpenGL that supports glDebugMessageCallback. (I'd rather avoid requiring anything that up-to-date, because I'd like to support an integrated Intel GPU from a few years back, and apparently OpenGL 3.2 is already too much for such a GPU.)

I did at least discover why glDebugMessageCallback wasn't working - it needs both glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS) *and* glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE), and I only had one of them.
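
For anyone else hitting this, the working combination looks roughly like this (a sketch; myDebugCallback stands in for whatever GLDEBUGPROC you've written):
code:
// ask GLFW for a debug context *before* creating the window
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GL_TRUE);

// then, once the context exists:
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(myDebugCallback, nullptr);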


The_Franz posted:

Newer GL versions need a user-created and bound vertex array object ...
And that's what glDebugMessageCallback said, so you were right too, thanks! :)

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
It looks like I need to just ask this because outdated answers from the internet are no answers at all.

What's the least frustrating way to use OpenGL such that it will work with minimal effort cross-platform, ideally including both mobile devices and Intel GPUs that are several years old (and regular modern hardware too of course)? I'm not looking to do anything fancier than a bunch of single textured triangles with alpha blending and render targets (for a 2D game), so I don't need any kind of advanced shader functionality.

Is it going to end up being "use OpenGL ES for new hardware and write different code and different shaders to use an older version of OpenGL for old hardware", or will OpenGL ES work on older PC hardware, or...?

Tres Burritos
Sep 3, 2009

roomforthetuna posted:

It looks like I need to just ask this because outdated answers from the internet are no answers at all.

What's the least frustrating way to use OpenGL such that it will work with minimal effort cross-platform, ideally including both mobile devices and Intel GPUs that are several years old (and regular modern hardware too of course)? I'm not looking to do anything fancier than a bunch of single textured triangles with alpha blending and render targets (for a 2D game), so I don't need any kind of advanced shader functionality.

Is it going to end up being "use OpenGL ES for new hardware and write different code and different shaders to use an older version of OpenGL for old hardware", or will OpenGL ES work on older PC hardware, or...?

WebGL? It's basically ES 2.0, I think, and it now runs on the latest iOS...

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Tres Burritos posted:

WebGL? It's basically ES 2.0, I think, and it now runs on the latest iOS...
My fault for not being clear with what I'm going for - I mistakenly asked for something that works cross-platform; what I really want is something I can compile to run natively on various platforms. I definitely don't want embedded web junk or JavaScript involved in any way. (But thanks for the answer; it was a fine answer to the question as originally asked.)

I don't mind having to include a special wrapper like Android will obviously need to run a C++ thing. But I'm hoping I don't also have to do special shaders and special interfaces to shaders too.

Edit: It's "do everything twice", isn't it. Otherwise it wouldn't make sense for Unity to have made their own intermediate shader language.

roomforthetuna fucked around with this message at 18:16 on Oct 26, 2014

Jo
Jan 24, 2005

:allears:
Soiled Meat

roomforthetuna posted:

My fault for not being clear with what I'm going for - I mistakenly asked for something that works cross-platform; what I really want is something I can compile to run natively on various platforms. I definitely don't want embedded web junk or JavaScript involved in any way. (But thanks for the answer; it was a fine answer to the question as originally asked.)

I don't mind having to include a special wrapper like Android will obviously need to run a C++ thing. But I'm hoping I don't also have to do special shaders and special interfaces to shaders too.

Edit: It's "do everything twice", isn't it. Otherwise it wouldn't make sense for Unity to have made their own intermediate shader language.

I hope the assembled will forgive me for recommending this, but libGDX may be worth investigating. It won't really compile to native code, as it's Java-based, but it will run on Android and on desktops with minimal fuss. I think there's iOS support, too, through RoboVM. RoboVM might be able to compile native apps, too. I swear by it for almost anything mobile. https://github.com/libgdx

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
In case anyone is curious or encounters a similar problem, it turns out my mistake was this:

roomforthetuna posted:

code:
glClear(GL_COLOR_BUFFER_BIT);
// HERE  <-------------
glUseProgram(program);
glUniform1ui(uniform_id, 0);
glVertexAttribPointer(...);
glEnableVertexAttribArray(...);
glBlendFunc(src_blend, dst_blend);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);  // <--------- IS WHERE THIS
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);  // <------- AND THIS SHOULD BE
glEnable(GL_BLEND);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, (void*)0);
glDisableVertexAttribArray(...);
// glBindBuffer(GL_ARRAY_BUFFER, some_other_buffer); // if I uncomment this line then the image no longer appears
glutSwapBuffers();
It seems one must bind the buffers before ... maybe the glVertexAttribPointer calls? Doing it the other way around doesn't provoke any reported error or anything; it just quietly doesn't work right. So without the commented-out line, the correct buffer remained bound from the previous repetition, and when trying to render two things I got weird, unpredictable results.

Spatial
Nov 15, 2007

Right, because the vertex attributes are associated with the vertex buffer which was bound at the time of the call to glVertexAttribPointer. Each attribute has its own vertex buffer handle which it sources data from. It doesn't change when you bind another vertex buffer afterwards.

I don't see any VAO (vertex array object) setup in your code. That's what the attributes and index buffer handle are stored inside. You're supposed to create your VAOs once at the beginning and then bind them when you want to draw. They control all your vertex state in just one call.

The code should look roughly like this:
C++ code:
// Setup (the order of creation doesn't matter, just the binds)
glGenVertexArrays( ... ); // Create VAO
glBindVertexArray( ... ); // Bind it

glGenBuffers( ... ); // Create vertex buffer
glBindBuffer( GL_ARRAY_BUFFER, ... ); // This binding is NOT part of the VAO state

glGenBuffers( ... ); // Create index buffer 
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ... ); // This binding IS part of the VAO state

glVertexAttribPointer( ... ); // The bound vertex buffer is stored in this attribute now!
glEnableVertexAttribArray( .. ); // Turn on that attribute



// Unbind the buffers just to make the point
glBindBuffer( GL_ARRAY_BUFFER, 0 );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
glBindVertexArray( 0 );



// Draw
glClear( ... );
glEnable( ... );

glUseProgram( ... );
glUniform1ui( ... );

glBindVertexArray( ... ); // Index buffer and attrib pointers are now bound
glDrawElements( ... ); // Wheeeeee
There's one oddity in there, right? Why is the GL_ARRAY_BUFFER binding not stored in the VAO? It's because you can bind multiple different vertex buffers to the VAO and associate each attribute with a different one. Each attribute stores its own buffer handle, so a single VAO-level binding would be redundant.

Spatial fucked around with this message at 23:24 on Oct 28, 2014

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Spatial posted:

I don't see any VAO (vertex array object) setup in your code. That's what the attributes and index buffer handle are stored inside. You're supposed to create your VAOs once at the beginning and then bind them when you want to draw. They control all your vertex state in just one call.
I think that's if you're targeting newer OpenGL than my target GPU level supports? Or is that something I can also do with an older version and it just doesn't complain if I don't?
(With a newer OpenGL target, on my other GPU, I do get a bunch of errors about not having a something or other, but I don't want to be coding for that target because the older version works on both GPUs.)

Edit: no, wait, I was misunderstanding you. I do create vertex array objects; that was just pseudocode showing my render function, not the initialization. What I was missing is that the VAO stores attributes and index buffer handles - I assumed the vertex array object just stored an array of vertices, like the name implies, and that I always had to re-bind everything else before calling a render.

So which things do I have to re-bind when rendering? The VAO, but not the index buffer, not the attributes, not the uniforms?
How about glBindTexture - affiliated with the VAO such that I can leave it alone, or not?
If I use the same shader with two different VAOs, and there is a uniform in that shader, can I set the uniform once for each VAO and then leave it alone, or is that value affiliated with the shader rather than the VAO?

roomforthetuna fucked around with this message at 02:25 on Oct 29, 2014

Spatial
Nov 15, 2007

Yeah it's pretty confusing. I made the exact same mistake at first. :)

VAO scope is fairly limited; it's purely a vertex setup thing. All that's stored is GL_ELEMENT_ARRAY_BUFFER_BINDING and these values for each attribute:
GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING
GL_VERTEX_ATTRIB_ARRAY_ENABLED
GL_VERTEX_ATTRIB_ARRAY_SIZE
GL_VERTEX_ATTRIB_ARRAY_STRIDE
GL_VERTEX_ATTRIB_ARRAY_TYPE
GL_VERTEX_ATTRIB_ARRAY_NORMALIZED
GL_VERTEX_ATTRIB_ARRAY_INTEGER
GL_VERTEX_ATTRIB_ARRAY_DIVISOR
GL_VERTEX_ATTRIB_ARRAY_POINTER


You still have to bind shaders and textures etc. as you were doing before. Lame, I know. Also, it's important to remember that it doesn't keep track of the bound vertex buffer, only the index buffer. If you're only calling drawing functions that's fine, but if you want to manipulate the vertex buffer data with glBufferSubData() or the like, you need to bind GL_ARRAY_BUFFER manually each time.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Spatial posted:

You still have to bind shaders and textures etc as you were doing before. Lame I know.
I'd be totally fine with that if it weren't for the few exceptions - stuff I don't have to bind or set explicitly - which means that when I *do* set it explicitly but in the wrong order, it gets unset implicitly. :)

So what's the deal with uniforms? If I'm using the same shader with a different uniform value to render two array buffers, is changing the uniform going to cause a "block until the previous operation is complete" like changing data in an array buffer would, such that I should instantiate two copies of the same shader program instead?

Spatial
Nov 15, 2007

roomforthetuna posted:

So what's the deal with uniforms? If I'm using the same shader with a different uniform value to render two array buffers, is changing the uniform going to cause a "block until the previous operation is complete" like changing data in an array buffer would, such that I should instantiate two copies of the same shader program instead?
Couldn't say. It seems like the sort of thing that would be easily buffered into a command stream by the driver and it's a really common use case. You would hope it would be optimised heavily. But then we're talking about OpenGL drivers so...

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Spatial posted:

Couldn't say. It seems like the sort of thing that would be easily buffered into a command stream by the driver and it's a really common use case. You would hope it would be optimised heavily. But then we're talking about OpenGL drivers so...
Thinking about it that way suggests it's probably fine, yeah. I hadn't really considered what the drivers would be doing behind the scenes, and you're right: it seems like "update this one value" would be a queue-able command, whereas "update this buffer with this big wad of data" would understandably be something quite different. It also fits that I've seen special mention of not updating buffers while they're in use, and no special mention of not updating uniforms while they're in use.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Nothing in a GL driver ever blocks. It just copies.

shodanjr_gr
Nov 20, 2007

roomforthetuna posted:

It looks like I need to just ask this because outdated answers from the internet are no answers at all.

What's the least frustrating way to use OpenGL such that it will work with minimal effort cross-platform, ideally including both mobile devices and Intel GPUs that are several years old (and regular modern hardware too of course)? I'm not looking to do anything fancier than a bunch of single textured triangles with alpha blending and render targets (for a 2D game), so I don't need any kind of advanced shader functionality.

Is it going to end up being "use OpenGL ES for new hardware and write different code and different shaders to use an older version of OpenGL for old hardware", or will OpenGL ES work on older PC hardware, or...?

May be a bit of overkill, but you could use OpenSceneGraph... especially if you are not rolling your own shaders.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Suspicious Dish posted:

Nothing in a GL driver ever blocks. It just copies.
Are you sure? I don't mean blocking in the code at call-time, but my understanding is that if you do "fill a buffer, render it, refill the same buffer, render it" without invalidating the buffer (which would make it really "fill a different buffer") then the "refill" operation can't start until the previous render has finished, so the graphics pipeline is blocked.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Drawing calls in GL are guaranteed to behave as if they were performed serially, each waiting until the previous one has finished. However, if the GPU can recognize that two calls can run in parallel without any observable effects (one render call renders to the top left of the framebuffer, the other to the bottom right), then it might schedule both at once.

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.

Suspicious Dish posted:

Nothing in a GL driver ever blocks. It just copies.

On a related note, I fixed horrific stuttering (e.g. occasionally taking 200ms to draw a frame) in my OpenGL renderer at the weekend, caused by me being an idiot and calling glFinish 4 times per frame. That certainly does block, but I now know you shouldn't ever need it.

The_Franz
Aug 8, 2003

While GL calls won't necessarily block, there are several that can force a CPU-GPU sync and silently kill your performance. For instance, any of the glGet* calls can do this, so it's generally better to shadow that state yourself. Good buffer management is also key, since certain buffer-update techniques are much faster than others; the 'naive' map/unmap approach is another way to kill your performance, since it can cause a sync.

There have been some good presentations on OpenGL buffer management and performance over the last year:

The AZDO presentation from GDC.

A similar talk given at Steam Dev Days:
https://www.youtube.com/watch?v=-bCeNzgiJ8I

A writeup on modern buffer handling by AMD's OpenGL driver guy.

MarsMattel posted:

On a related note I fixed horrific stuttering (e.g. occasionally taking 200ms to draw a frame) in my OpenGL renderer at the weekend caused by me being an idiot and calling glFinish 4 times per frame. That certainly does block, but I now know you shouldn't ever need it.

I remember reading that some drivers just make glFinish a no-op since most people don't actually use it correctly. If you want to enforce some kind of syncing these days, it's better to manually set and block on fences.
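
Roughly like this, for reference (a sketch; timeoutNs is a timeout you pick, in nanoseconds):
code:
// after submitting the GPU work you care about:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// later, only when you actually need the results:
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, timeoutNs);
glDeleteSync(fence);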

The_Franz fucked around with this message at 16:21 on Nov 4, 2014

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
Yeah, I certainly didn't need it. It just seemed like a reasonable thing to do (e.g. after finishing the shadow map pass, or the first pass of a deferred renderer) to make sure those stages were complete before the following stages attempted to use their output. The calls were in for months and months before they caused problems; it's only recently, as I've started rendering, uploading and releasing a lot more data, that I started running into problems.

Which leads to something I've been wondering about: I've implemented an 'infinite' voxel terrain renderer (https://www.youtube.com/watch?v=rhlt5HwfOhY) which requires uploading and releasing data pretty much constantly. Will that always cause slowdowns, or is there a way to organise things to avoid that? I had a quick look at that post by the AMD guy, and it seems persistent maps would be better than my glBufferData calls?

The_Franz
Aug 8, 2003

Yeah, if you are generating a lot of dynamic geometry, the current best practice is to use a persistently mapped buffer. It is more work on the application side, since you are responsible for manual memory management and fencing to make sure you don't trample data the GPU is currently working on, but the performance gains can be quite large.

There is a nice brief overview of how to do this starting at slide 83 of the AZDO presentation.
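
The core of the persistent-map approach looks roughly like this (a sketch; needs GL 4.4 or ARB_buffer_storage, with size and ptr standing in for your own bookkeeping):
code:
// allocate immutable storage that may stay mapped while the GPU reads it
glBufferStorage(GL_ARRAY_BUFFER, size, nullptr,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
// stream vertex data through ptr every frame, fencing so you never
// overwrite a region the GPU is still reading from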

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
Right. Thanks for the help!

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
The way to imagine it is that whenever you call gl*, you're making a remote procedure call to some external rendering server; the OpenGL specification is written in those terms, and any observable deviation from that behavior is a spec violation. So you can batch up multiple glDrawElements calls, but whenever you query something from the server, you have to wait for everything else to finish.

OpenGL is based on SGI hardware and SGI architecture from the 80s. If you're ever curious why glEnable and some other method with a boolean argument are two separate calls, it's because on SGI hardware, glEnable hit one register with a bit flag, and that other method hit another. glVertex3i just poked a register as well. You can serialize these over the network quite well, so why not do it?


brian
Sep 11, 2001
I obtained this title through beard tax.

Hey fellas, I've been doing a rare foray into graphics stuff and I've written a shader for Unity that does palette-based textures using a palette-indexed picture and a palette texture. It's entirely for doing old palette-shifting effects, and it's almost certainly horribly inefficient, but it works in a limited capacity. However, I had to add a constant I don't understand. Anyway, here's the shader:

code:
Shader "Unlit/Palette" 
{
	Properties 
	{
		_MainTex ("Base (Palette Indices)", 2D) = "white" {}
		_PaletteTex ("Palette (RGB)", 2D) = "white" {}
		_PaletteWidth ("Palette Width", Float) = 0
		_PaletteHeight ("Palette Height", Float) = 1
		_PaletteOffset ("Palette Offset", Float) = 0
		_PaletteLength ("Palette Length (all = 0)", Float) = 0
		_TransparentColourIndex ("Transparent Colour Index (-1 for none)", Float) = -1
	}

	SubShader 
	{
		Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
		LOD 100
	
		ZWrite Off
		Blend SrcAlpha OneMinusSrcAlpha 

		Pass 
		{  
			CGPROGRAM
				#pragma vertex vert
				#pragma fragment frag
			
				#include "UnityCG.cginc"

				struct appdata_t {
					float4 vertex : POSITION;
					float2 texcoord : TEXCOORD0;
				};

				struct v2f {
					float4 vertex : SV_POSITION;
					half2 texcoord : TEXCOORD0;
				};

				sampler2D _MainTex;
				sampler2D _PaletteTex;
				float _PaletteWidth;
				float _PaletteHeight;
				float _PaletteOffset;
				float _PaletteLength;
				float _TransparentColourIndex;
				float4 _MainTex_ST;
			
				v2f vert (appdata_t v)
				{
					v2f o;
					o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
					o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
					return o;
				}
			
				fixed4 frag (v2f i) : SV_Target
				{
					float4 col = tex2D(_MainTex, i.texcoord);
					unsigned int palIndex = floor((col.r * 255) + (col.g * 255) + (col.b * 255));

					_PaletteWidth = floor(_PaletteWidth);

					if(_PaletteLength == 0)
						_PaletteLength = _PaletteWidth;

					if(palIndex == floor(_TransparentColourIndex))
						return float4(0,0,0,0);

					float2 palettePosition = float2(((palIndex % floor(_PaletteLength)) + floor(_PaletteOffset)) % _PaletteWidth, 0);

					if(floor(palettePosition.x) == floor(_TransparentColourIndex))
						palettePosition.x = (palettePosition.x + 1) % _PaletteWidth;

					return tex2D(_PaletteTex, float2((palettePosition.x / _PaletteWidth) + 0.01, 0));
				}
			ENDCG
		}
	}
}

(the float inputs and floors are just to use the editor functionality and will go away when I do most of it in scripting)

So I was wondering why I have to add the +0.01 to the x of the tex2D call. If I have it without the +0.01, it misses one of the colours and everything is indexed wrong, and I suspect that when I make it support multiple lines of palettes per file (for power-of-2 textures and whatnot) I'll have to add the same to the y. Any help would be fab.

Also is there a way to get the dimensions of a texture/sampler or do you really have to pass them in each time?

edit: I added support for square palettes, and for some reason now I need to subtract 0.01 from the y component of the tex2D call instead of adding it. I still have to add an odd constant that's bound to go wrong when the palette texture gets big, and I don't know why :(
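
For what it's worth, that smells like the usual half-texel problem: palettePosition.x / _PaletteWidth lands exactly on the boundary between two palette texels, so rounding decides which colour you get. Sampling texel centres should remove the magic constant (a sketch):
code:
// sample the centre of palette texel N instead of its left edge
float u = (floor(palettePosition.x) + 0.5) / _PaletteWidth;
return tex2D(_PaletteTex, float2(u, 0.5));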

brian fucked around with this message at 11:18 on Nov 5, 2014
