newsomnuke
Feb 25, 2007

What does GL do when you try to upload a non-pow2 texture, without using the non-pow2 extension? When I do this, it works just as a pow-2 texture does, but:
(a) Does it upscale the texture to a pow-2 size internally, or leave it as it is?
(b) Is the behaviour in (a) dependent on the GL version or anything else?
(c) Is there a way to get the width/height of an uploaded texture?
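For (c), I'm guessing glGetTexLevelParameteriv is the right call - something like this, with the texture bound - though I'm not sure whether it reports the size I uploaded or whatever the driver actually allocated (which is really question (a) again):

code:
// hypothetical check for (c): textureId is whatever handle the image was uploaded to
GLint width = 0, height = 0;
glBindTexture(GL_TEXTURE_2D, textureId);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);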

newsomnuke
Feb 25, 2007

I presume that it also runs in plain debug mode, rather than only under the debugger? If that's the case, it could be any number of things. My money's on uninitialised variables - see this page for more info.
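The classic pattern looks something like this (made-up example, someCondition is just a stand-in): debug builds and debug heaps tend to fill uninitialised memory with known patterns, so code like this can behave completely differently between builds or depending on how it's launched:

code:
int count;                // never initialised - its starting value depends on the build and how the program is run
for (int i = 0; i < 10; ++i)
{
	if (someCondition(i)) // someCondition: hypothetical stand-in
		++count;
}
printf("%d\n", count);    // may look fine in one configuration, garbage in another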

newsomnuke
Feb 25, 2007

How reliable are non-pow2 textures in OpenGL? They've been promoted to the core spec and I've never had any problem with them, but I've only had a very limited range of cards to test on (all nvidia). I've just come back to graphics coding after a pretty lengthy absence - are they still slow, or is the speed difference negligible?
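In the meantime I'm at least checking at runtime that the card actually advertises them before relying on them - roughly:

code:
// NPOT is core in GL 2.0+, otherwise look for the ARB extension
const char* exts = (const char*)glGetString(GL_EXTENSIONS);
const char* ver  = (const char*)glGetString(GL_VERSION);
int hasNPOT = (exts && strstr(exts, "GL_ARB_texture_non_power_of_two") != NULL)
           || (ver && atof(ver) >= 2.0);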

newsomnuke
Feb 25, 2007

OK, that's a bit more drastic than I was expecting, and pretty conclusive. I was actually only considering using one for a full-screen overlay (i.e. a single 1024x768 texture), because using a 1024x1024 causes slight but noticeable scaling artifacts. I don't think this is going to impact performance - I'm more worried that drivers (especially ATI's) will be dodgy and give me the old white-texture routine.
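The fallback I've got in mind is keeping the 1024x1024 texture but only filling and sampling the 1024x768 region of it, so nothing ever gets rescaled - roughly this (overlayId/pixels being my existing handle and image data):

code:
glBindTexture(GL_TEXTURE_2D, overlayId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);        // allocate the pow-2 texture
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 768,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // fill only the part that's used

// then draw the overlay quad with t running 0 .. 768/1024 instead of 0 .. 1
float tMax = 768.0f / 1024.0f;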

newsomnuke
Feb 25, 2007

OneEightHundred posted:

As far as I know, rectangle textures (see: ARB_texture_rectangle) give good speed on all hardware, you just have to deal with the fact that the coordinates aren't normalized and you can't mipmap them.

I've gone for this, thanks. I'm only using a small number of textures (less than 5), so I doubt performance will be a problem, but I've created a fallback just in case.
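The changes turned out to be small - it's basically just the different target plus pixel-space texcoords, roughly (overlayId/pixels are the same handle and data as before):

code:
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, overlayId);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, 1024, 768, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);      // non-pow-2 size is fine here
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps allowed
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBegin(GL_QUADS);                                    // texcoords are in pixels, not 0..1
	glTexCoord2f(0.0f,    0.0f);   glVertex2f(0.0f,    0.0f);
	glTexCoord2f(1024.0f, 0.0f);   glVertex2f(1024.0f, 0.0f);
	glTexCoord2f(1024.0f, 768.0f); glVertex2f(1024.0f, 768.0f);
	glTexCoord2f(0.0f,    768.0f); glVertex2f(0.0f,    768.0f);
glEnd();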

newsomnuke
Feb 25, 2007

This is a bit of a ridiculous question because it seems so basic, but: I can't get smooth scrolling in OpenGL (using ortho projection). It seems to be because the floating point coords are getting rounded down (which makes sense) and I wouldn't have thought it would be noticeable at 60 fps, but it is. When I use integer coords, everything is fine.

newsomnuke
Feb 25, 2007

Alcool Tue posted:

From my limited perspective on General Programming, properly declared float values aren't going to round themselves down unless you're turning them into ints OR you're running math with both ints and floats as values, which could either truncate your floats or add a .0 to your ints depending on some poo poo.

The rounding occurs when the positions are rasterised to screen, which is a set of discrete pixels.
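The workaround I've settled on for now is just doing the snap myself - rounding the scroll offset to whole pixels before translating, assuming a pixel-aligned ortho (scrollX/scrollY are my float offsets):

code:
// with glOrtho(0, width, height, 0, -1, 1) one unit == one pixel,
// so snapping the offset keeps everything on exact pixel positions
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(floorf(scrollX + 0.5f), floorf(scrollY + 0.5f), 0.0f);
// ... draw the scrolling scene as normal ...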

newsomnuke
Feb 25, 2007

What's up with this vertex shader?

code:
void main(void)
{
	vec4 pos = vec4(gl_Vertex);
	pos.w = 1;

	gl_Position = gl_ModelViewProjectionMatrix * pos;
}
It only works when I set the vertices' 'w' coordinate to 1 before sending them to the GPU, for instance:

code:
for (size_t i = 0; i < vertSize; i += 4)
{
	verts[i + 0] = x;
	verts[i + 1] = y;
	verts[i + 2] = z;
	verts[i + 3] = distance; // only works if 'distance' is 1
}

...
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertSize * sizeof(GLfloat), verts, GL_STATIC_DRAW_ARB);
...
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexId);
glVertexPointer(4, GL_FLOAT, 0, 0);
...
glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_SHORT, 0);
The vertices end up on screen and run through the fragment shader properly; it's just that their positions are completely wrong. Why isn't the 'w' component being reset properly?

newsomnuke
Feb 25, 2007

PDP-1 posted:

The w component of your vector should pretty much always be 1.

I know, which is why I reset it to 1 in the shader before applying the transform. But it doesn't seem to work!

edit: I'm using the 'w' component to store extra information which I use in the vertex shader, if you were wondering.

newsomnuke
Feb 25, 2007

edit: OK, it's working. Turns out the problem was actually in the fragment shader, where I was doing something with gl_FrontColor that I shouldn't have been.
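In case anyone trips over the same thing: with the old built-ins, gl_FrontColor is a vertex shader output only, and the fragment shader reads the interpolated value through gl_Color - i.e. the usual split looks like:

code:
// vertex shader: write the per-vertex colour to gl_FrontColor
void main(void)
{
	gl_FrontColor = gl_Color;
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: read the interpolated colour via gl_Color
void main(void)
{
	gl_FragColor = gl_Color;
}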

newsomnuke fucked around with this message at 13:15 on May 16, 2010
