haveblue
Aug 15, 2005



Toilet Rascal

Thug Bonnet posted:

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?

No, not really. It will be deprecated whenever they get around to releasing OpenGL 3.0.

haveblue
Aug 15, 2005



Toilet Rascal

Entheogen posted:

How exactly does 3D texturing work? If I define a 3D texture for a flat polygon, does that polygon sort of slice through the 3D texture and get a 2D slice of it mapped onto the polygon? Is that kind of how this works?

Yep. You assign each vertex 3 texture coordinates instead of 2, and it interpolates through a 3D texture space instead of a 2D plane. There are no limits on these coordinates, they don't have to form axis-aligned planes or anything, but you should avoid bent polygons in texture space just like in world space.
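
If it helps, here's a minimal immediate-mode sketch (coordinates made up) of a quad sampling an arbitrary slab through a bound GL_TEXTURE_3D:

code:
glEnable(GL_TEXTURE_3D);
glBegin(GL_QUADS);
/* each corner gets an (s, t, r) coordinate; together they pick out
   an arbitrary slice through the volume */
glTexCoord3f(0.0f, 0.0f, 0.25f); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord3f(1.0f, 0.0f, 0.25f); glVertex3f( 1.0f, -1.0f, 0.0f);
glTexCoord3f(1.0f, 1.0f, 0.75f); glVertex3f( 1.0f,  1.0f, 0.0f);
glTexCoord3f(0.0f, 1.0f, 0.75f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();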

GL 3.0 will also have a major overhaul of the API, dropping the fixed-function pipeline and bringing in OOP, so it should at least be easier to use than the current system.

haveblue fucked around with this message at 15:24 on Jul 27, 2008

haveblue
Aug 15, 2005



Toilet Rascal

Entheogen posted:

What is a good OpenGL way to get translucency for multiple objects? Perhaps I can turn off the depth test and set this blend function?

code:
glBlendFunc( GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
I use this and it works somewhat well for many objects, but there are certain artifacts produced. Should I also render from back to front? The problem with that is that I would like to use display lists, and how would I render from back to front while also having a rotating camera?

Yes, you usually need to render from back to front for 100% correct blending. There's no easy way around this; you need to depth-sort your transparent objects every frame. This isn't too expensive: since you only care about the ordering and not the actual distances, you can compare squared distances and skip the square root.
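
A minimal sketch of the sort, assuming a hypothetical Object struct with a world position and a camera position kept in globals:

code:
#include <stdlib.h>

typedef struct { float x, y, z; /* plus whatever else */ } Object;

static float cam_x, cam_y, cam_z; /* updated as the camera moves */

static float dist_sq(const Object *o)
{
    float dx = o->x - cam_x, dy = o->y - cam_y, dz = o->z - cam_z;
    return dx*dx + dy*dy + dz*dz; /* no sqrt; only the ordering matters */
}

static int back_to_front(const void *a, const void *b)
{
    float da = dist_sq(a), db = dist_sq(b);
    return (da < db) - (da > db); /* farthest first */
}

/* each frame, before the transparent pass:
   qsort(objects, object_count, sizeof(Object), back_to_front); */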

If you're worried about polygons within display lists drawing in the wrong order, turning on backface culling will get rid of 99% of that.

vvvvvvvv Yes, but rejecting per polygon is almost certainly faster.

haveblue
Aug 15, 2005



Toilet Rascal
What hardware are you on? There may be a vendor-specific shader extension you can use instead of the ARB feature.

haveblue
Aug 15, 2005



Toilet Rascal
You could also try pulling the rear clip plane in closer to increase the resolution of the depth buffer.

haveblue
Aug 15, 2005



Toilet Rascal
Remember that GL is designed as a client/server system. Using two cores is natural.

haveblue
Aug 15, 2005



Toilet Rascal

Mithaldu posted:

Another question about OpenGL: As far as I can see it only offers a spot light source, which I'd have to put really far away to emulate a global directional light. Is there any direct way to make a global directional light in OpenGL?

Set the w coordinate (fourth parameter of GL_POSITION) to 0 to get a directional light source (technically a point light at infinite distance).
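
For example (direction made up):

code:
GLfloat dir[] = { 0.0f, 1.0f, 1.0f, 0.0f }; /* w = 0: a direction, not a position */
glLightfv(GL_LIGHT0, GL_POSITION, dir);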

haveblue
Aug 15, 2005



Toilet Rascal

Mithaldu posted:

I'm not sure if VBOs give me any advantage there.

VBOs would be an advantage for anything that's generated exactly once and never modified for the entire session. For any element that might be frequently rebuilt, they're a wash or worse.
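
A minimal sketch of the build-once case, assuming verts is a filled array of floats:

code:
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* GL_STATIC_DRAW hints the data is specified once and drawn many times */
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);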

haveblue
Aug 15, 2005



Toilet Rascal
Can't you request that software fallback be disabled when creating the context? That would cause it to error out instead of silently slowing down, right?

haveblue
Aug 15, 2005



Toilet Rascal

shodanjr_gr posted:

Using one framebuffer and alternating color attachments sounds very feasible to me. What I am not 100% sure about is whether you can read and write to the same texture inside the same shader call.

Pretty sure a texture can't be bound for sampling and set as the render destination simultaneously.

haveblue
Aug 15, 2005



Toilet Rascal

Mithaldu posted:

I'm not sure why you're talking about voxels now. I've never mentioned them and am rendering a 3d grid of even-sized objects which are either cubes or more complex objects that can fit inside cubes.

He's talking about how visibility techniques intended for voxel objects could also be applied to a dense 3D grid of anything in general. A voxel world is just a 3D grid of objects where the objects all happen to be cubes.

haveblue
Aug 15, 2005



Toilet Rascal
For a second thing, the iPhone does not use shaders, so learning those at this stage won't be too helpful.

haveblue
Aug 15, 2005



Toilet Rascal
Maybe it's falling back to software? Can you request a hardware-only context and see if it refuses or fails to render?

haveblue
Aug 15, 2005



Toilet Rascal

dimebag dinkman posted:

This is a 2D rather than a 3D question, but - if I'm using OpenGL, what's the recommended way(s) of doing an integer scaling of the output to the screen (i.e. either normal size, or exactly double in both dimensions, or exactly triple, etc.), with no filtering (nearest neighbour scaling)? I do want this to be over the entire rendered output, not just individual components (so, for example, if something is rendered rotated at 45 degrees, its pixels still appear aligned to the edges of the screen at 4x scale).

Render the scene to an FBO appropriately smaller than the screen, render the result on a fullscreen quad with the filter set to GL_NEAREST, go hog wild.
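
Something like this on the FBO's color texture (fbo_color_tex is an assumed handle):

code:
glBindTexture(GL_TEXTURE_2D, fbo_color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);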

haveblue
Aug 15, 2005



Toilet Rascal

not a dinosaur posted:

Right. This is what I usually do for integral scaling.

code:
void Application::resize(int screen_width, int screen_height)
{
    int scaling_factor =
        std::min(screen_width / target_width, screen_height / target_height);
    int render_width = target_width * scaling_factor;
    int render_height = target_height * scaling_factor;
    int render_x = (screen_width - render_width) / 2;
    int render_y = (screen_height - render_height) / 2;

    glViewport(render_x, render_y, render_width, render_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    glOrtho(0.0f, target_width, 0.0f, target_height, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
target_width and target_height are set to whatever low resolution you intend to scale up.

edit: should note this will center the viewing area in the middle of the window

Well, yeah, he could do that, but he still has to scale the result up to fill the entire normal viewport, which is included in what I posted.

haveblue
Aug 15, 2005



Toilet Rascal

brian posted:

How do I make a smaller texture out of an existing loaded texture in OpenGL? I want to be able to turn a frame in a sprite sheet into a whole texture so I can repeat it over an area. I know it's fairly simple if it's just a single texture, so I've been trying to work out how to separate a section of an existing texture into a new texture handle. I've looked at glCopyTexSubImage2D but it seems to act on the read buffer; is there any way I can do this other than separating the sprites in the spritesheet into separate textures at load time?

You can change the read buffer to the source texture with the framebuffer object API.
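
A sketch with the EXT framebuffer object API (all handles and rectangle values assumed):

code:
/* attach the spritesheet texture as the read source */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, sheet_tex, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);

/* copy one frame's rectangle into the destination texture */
glBindTexture(GL_TEXTURE_2D, frame_tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame_x, frame_y, frame_w, frame_h);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);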

haveblue
Aug 15, 2005



Toilet Rascal
Also, you stop doing multitexture by disabling GL_TEXTURE_* on all the texture units except 0, so it has to be per-unit.
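
For example:

code:
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);     /* unit 1 off */
glActiveTexture(GL_TEXTURE0); /* back to unit 0, which stays enabled */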

haveblue
Aug 15, 2005



Toilet Rascal

Dicky B posted:

Direct3D beginner here. I've been playing around with vertex buffers but I haven't been able to find any good examples so I'm making a lot of assumptions about how I should be doing things.

I'm working on a simple application that just renders some different shapes to the screen. To do this I create one big vertex buffer and add all the vertices for each shape to this buffer, and keep a list of all the shapes I've added (number of vertices, vertex size and primitive type), so when it comes to rendering everything I can go through the list and render each shape one by one by calculating the number of bytes to offset when reading from the buffer. I hope that makes sense.

Is this a good way of doing this or am I being retarded? If I understand correctly it's best to be using one vertex buffer (as opposed to a separate buffer for each shape). If I want to start animating things the important thing is to minimize the number of locks/unlocks each frame, so I would send the vertices for all the shapes in one batch.

What if I want to dynamically change the number of shapes? I would need to also change the size of the vertex buffer. Is there a way of doing this or should I just release it and create a new one whenever I need more/less space?

If somebody could let me know if I'm on the right track or if I'm grossly misunderstanding anything it would be much appreciated! :)

How many objects are we talking about here? You won't realize a significant savings unless you are eliminating hundreds or thousands of locks per frame.

And it's going to become a huge headache when you want to be able to dynamically add and remove objects from the scene.

haveblue
Aug 15, 2005



Toilet Rascal

Contero posted:

What's with objects still being visible in OpenGL even with all of my lights turned off / set to zero? Is there some gl option to say "yes I really want complete darkness"? Or am I just doing something dumb?

The default value of the ambient material property is not zero, you're probably seeing that.
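
The defaults are (0.2, 0.2, 0.2, 1.0) for both the material ambient and the global scene ambient; zero both for true darkness:

code:
GLfloat black[] = { 0.0f, 0.0f, 0.0f, 1.0f };
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, black);
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, black);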

haveblue
Aug 15, 2005



Toilet Rascal
Or a PowerVR SGX, which is just on the cusp of appearing on store shelves if it isn't there already and is pretty much a lock for a future iPhone.

haveblue
Aug 15, 2005



Toilet Rascal
Don't clear the matrix between gluPickMatrix and gluPerspective; the pick matrix is supposed to modify the current projection.
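
The sequence should look something like this (pick region size and variable names made up):

code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* pick matrix first... */
gluPickMatrix((GLdouble)mouse_x, (GLdouble)(viewport[3] - mouse_y),
              5.0, 5.0, viewport);
/* ...then the usual projection multiplied on top of it,
   with no glLoadIdentity in between */
gluPerspective(60.0, aspect, 0.1, 100.0);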

I'm having trouble trying to use multitexture and lighting together in OpenGL ES 1.1. When I set the first texture unit to modulate and the second to decal, it looks like the vertex colors are only modulating the first texture unit and the second is going straight through, modulated only by its own alpha. Is there some trick I can do with the texture combiner to calculate C = C1*A1*Cf + C0*(1-A1)*Cf, or is this impossible (in a single pass) in the fixed-function pipeline?

haveblue fucked around with this message at 18:59 on May 15, 2009

haveblue
Aug 15, 2005



Toilet Rascal
Everyone switched to either DX10 or GL ES.

haveblue
Aug 15, 2005



Toilet Rascal

Small White Dragon posted:

How would this work? I was under the impression that with a TRIANGLE_STRIP, each subsequent quad shared two vertices (and two texture coordinates) with the previous quad, whereas in this case the texture coordinates for a quad might not be adjacent to the previous quad.

Triangles, not quads. A triangle strip goes in a zigzag: you place two corners of the first triangle, then the third corner, then a fourth corner, which forms a second triangle that shares the 2nd and 3rd vertices (and the edge between them) with the first.

A triangle fan would have the same layout in memory as a normal quad; it would just be interpreted, again, as two triangles sharing an edge and the two vertices that define it.
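
So a quad's worth of strip data looks something like this (positions made up):

code:
/* triangle 1 is (v0, v1, v2); triangle 2 reuses v1 and v2 and adds v3,
   sharing the edge between them */
GLfloat strip[] = {
    0.0f, 0.0f,   /* v0: bottom left  */
    1.0f, 0.0f,   /* v1: bottom right */
    0.0f, 1.0f,   /* v2: top left     */
    1.0f, 1.0f,   /* v3: top right    */
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, strip);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);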

haveblue
Aug 15, 2005



Toilet Rascal
There is a separate iPhone dev thread that covers that a lot and seems to be a fan of cocos2d.

haveblue
Aug 15, 2005



Toilet Rascal

Unparagoned posted:

I'd have to use ray tracing & a lot of code to find out what the order should be. I think the performance hit of doing this every frame would be too high.

I'm not sure what you're trying to do here, but lots of games depth-sort their sprites in real time, so we must both be missing something.

haveblue
Aug 15, 2005



Toilet Rascal

shodanjr_gr posted:

http://kotaku.com/5335483/new-cryengine-3-demo

Anyone got any info/links for the technique demonstrated by Cry-tek in the linked video?

At a guess from the "behind the scenes" section, it's a combination of the techniques for bloom and ambient occlusion. Each object must be producing a large color blob in an offscreen buffer somewhere which is then sampled by the surface shader for the environment.

e: or read the whitepaper, yeah

haveblue
Aug 15, 2005



Toilet Rascal

heeen posted:

Maybe go rant in the gameplay development thread?

Maybe go tell him off in the Kotaku thread he copied and pasted that from? And then we can all go back to discussing 3D graphics programming instead of YCSery?

I could barely follow that Crytek whitepaper, but just enough to see that my guess was completely wrong. I didn't think we had reached the point where a true volumetric effect like that was possible, I thought screenspace tricks were still a necessity.

haveblue
Aug 15, 2005



Toilet Rascal

slovach posted:

Thanks, I'll switch to CreateVertexDeclaration.


And is it normal for texture changes to pound my rear end so much? It's especially noticeable when I'm drawing a ton of sprites. Using only one texture my fps sits at almost 300 when drawing 40,000 something sprites. Alternating between 3 textures, it's floating around 100... that's a 6.67ms difference in render time. Obviously this is a bit extreme and in reality I'll never be trying to render so much at once, but just something I noticed when playing around.

If you do several thousand of them per frame, then yes. Try reorganizing the sprites so you can draw them in batches of the same texture without having to rebind.
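
A sketch of the idea, with a hypothetical Sprite type and the array already sorted by texture handle:

code:
typedef struct { GLuint tex; /* position, UVs, etc. */ } Sprite;

void draw_sprites(const Sprite *s, int count)
{
    GLuint bound = 0;
    for (int i = 0; i < count; i++) {
        if (s[i].tex != bound) { /* rebind only when the texture changes */
            glBindTexture(GL_TEXTURE_2D, s[i].tex);
            bound = s[i].tex;
        }
        /* ...submit this sprite's quad... */
    }
}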

haveblue
Aug 15, 2005



Toilet Rascal
Option C: Don't make a texture, use the vertex color attribute.

haveblue
Aug 15, 2005



Toilet Rascal
I'm really having trouble wrapping my head around duplicating the OpenGL lighting model in a shader in ES 2.0.

So far, I think the procedure goes like this (for a directional light):

-Transform the light vector by the modelview matrix and normalize the result to get the eyespace light vector.
-Transform the vertex normal by the normal matrix (the upper left 3x3 submatrix of the modelview matrix, assuming no nonuniform scaling) and normalize the result to get the eyespace normal.
-Take the dot product of the eyespace light vector and the eyespace normal, clamped to a minimum of 0, to get the diffuse contribution.
-Multiply the diffuse contribution by the light color to get the diffuse component of the fragment.

I think something is wrong in the light vector transformation: when I display the eyespace normals they seem to be correct, but I can see the diffuse contribution of each vertex changing as I move the camera forward and back, which doesn't seem like something that should happen.

I also can't find any good explanations of this online; if anyone has any, I'd appreciate them (all the links I turn up are just OpenGL recipes, and following them doesn't seem to help).

haveblue fucked around with this message at 22:58 on Sep 23, 2009

haveblue
Aug 15, 2005



Toilet Rascal

Spite posted:

Your problem is probably applying the translation to the light vector, instead of just the rotation/scale.

Looks like this was exactly it, thanks!
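
For anyone following along, a minimal sketch of the corrected setup (ES 2.0 vertex shader; the uniform names are assumptions, and lightVector is the eye-space light direction with only rotation/scale applied, no translation):

code:
attribute vec4 position;
attribute vec3 normal;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;  /* upper-left 3x3 of the modelview */
uniform vec3 lightVector;   /* eye space, normalized */
uniform vec4 lightColor;

varying vec4 diffuse;

void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * position;
    vec3 n = normalize(normalMatrix * normal);
    diffuse = max(dot(n, lightVector), 0.0) * lightColor;
}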

haveblue
Aug 15, 2005



Toilet Rascal
OK, now that eye space works, let's try tangent space :v:

I've implemented tangent space generation based on the code segment on this page, but a good chunk of the polygons are coming out as if the normal map is upside-down. Does this mean the handedness is not being handled properly, or is something else wrong?

Vertex shader:

code:
attribute vec4 position;
attribute vec4 color;
attribute vec2 texcoord;
attribute vec4 normal;
attribute vec4 tangent;
attribute vec4 binormal;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 normalMatrix;
uniform mat4 textureMatrix;
uniform vec4 lightVector;
uniform vec4 lightColor;

varying vec4 colorVarying;
varying vec2 tcVarying;
varying vec4 tangentLightVector;

void main()
{
 gl_Position = projectionMatrix * modelViewMatrix * position;

 vec4 normalVarying = normalize(normalMatrix * normal);

 vec4 tangentEye = normalize(normalMatrix * tangent);

 vec4 binormalEye = normalize(normalMatrix * binormal);

//transform eyespace light to tangent space
 tangentLightVector.x = dot(lightVector, tangentEye);
 tangentLightVector.y = dot(lightVector, binormalEye);
 tangentLightVector.z = dot(lightVector, normalVarying);

 tangentLightVector.w = 0.0;
 tangentLightVector = normalize(tangentLightVector);

 colorVarying = color;
 vec4 fullTexture = textureMatrix*vec4(texcoord.x, texcoord.y, 0, 1);
 tcVarying = vec2(fullTexture.x, fullTexture.y);
}
e: This is the result of dot(texture2D(normal map, tcVarying), tangentLightVector) in the fragment shader. Does anything obvious leap out at anyone?

haveblue fucked around with this message at 20:54 on Sep 29, 2009

haveblue
Aug 15, 2005



Toilet Rascal

Spite posted:

Are your T,B,N vectors the handedness you are expecting? Maybe your bitangent is pointing the opposite direction or something.

Maybe, but I think I've duplicated the method on the page I linked. The only change I made was generating the bitangent beforehand as a vertex attribute instead of computing it in the shader as they suggest.

This is the tangent/bitangent generator, in case there's something in it I missed:

code:
{
 //calculate the tangent of each vertex with this code I found on the internet
 
 GLfloat *tan1 = (GLfloat *)malloc(Mesh_VertexCount*3*sizeof(GLfloat));
 GLfloat *tan2 = (GLfloat *)malloc(Mesh_VertexCount*3*sizeof(GLfloat));
 int currentTangent = 0;
 GLfloat *vertex;
 GLfloat *texcoord;
 for(int i=0;i<Mesh_VertexCount;i+=3)
 {
  vertex = vertPointer+(i*elementStride);
  texcoord = vertPointer+(i*elementStride)+6;
  
  float x1 = vertex[3] - vertex[0];
  float x2 = vertex[6] - vertex[0];
  float y1 = vertex[4] - vertex[1];
  float y2 = vertex[7] - vertex[1];
  float z1 = vertex[5] - vertex[2];
  float z2 = vertex[8] - vertex[2];
  
  float s1 = texcoord[2] - texcoord[0];
  float s2 = texcoord[4] - texcoord[0];
  float t1 = texcoord[3] - texcoord[1];
  float t2 = texcoord[5] - texcoord[1];
  
  float r = 1.0/((s1*t2) - (s2*t1));
  
  Vector3 sdir = Vector3(((t2*x1) - (t1*x2))*r, ((t2*y1) - (t1*y2))*r, ((t2*z1) - (t1*z2))*r);
  Vector3 tdir = Vector3(((s1*x2) - (s2*x1))*r, ((s1*y2) - (s2*y1))*r, ((s1*z2) - (s2*z1))*r);
  
  memcpy(tan1+(i*3), &(sdir.x), sizeof(GLfloat)*3);
  memcpy(tan1+(i*3)+3, &(sdir.x), sizeof(GLfloat)*3);
  memcpy(tan1+(i*3)+6, &(sdir.x), sizeof(GLfloat)*3);
  memcpy(tan2+(i*3), &(tdir.x), sizeof(GLfloat)*3);
  memcpy(tan2+(i*3)+3, &(tdir.x), sizeof(GLfloat)*3);
  memcpy(tan2+(i*3)+6, &(tdir.x), sizeof(GLfloat)*3);
 }
 
 //now calculate the actual tangent of each vertex
 
 tangentPointer = (GLfloat *)malloc(Mesh_VertexCount*4*sizeof(GLfloat));
 bitangentPointer = (GLfloat *)malloc(Mesh_VertexCount*4*sizeof(GLfloat));
 
 for(int i=0;i<Mesh_VertexCount;i++)
 {
  Vector3 n = Vector3(vertPointer[(i*elementStride)+3], vertPointer[(i*elementStride)+4], vertPointer[(i*elementStride)+5]);
  Vector3 t = Vector3(tan1[i*3], tan1[(i*3)+1], tan1[(i*3)+2]);
  Vector3 t2 = Vector3(tan2[i*3], tan2[(i*3)+1], tan2[(i*3)+2]);
  
  //perform gram-schmidt orthogonalize
  Vector3 tangent = (t - (n * n.Dot(t)));
  tangent.Normalize();  
  memcpy(tangentPointer+(i*4), &(tangent.x), sizeof(GLfloat)*3);

  //calculate handedness
  tangentPointer[(i*4)+3] = ((n.Cross(t)).Dot(t2) < 0.0) ? -1.0 : 1.0;
  //calculate the bitangent by crossing the tangent with the normal and scaling by the handedness
  Vector3 bitangent = n.Cross(tangent)*tangentPointer[(i*4)+3];
/*  bitangent.x *= tangentPointer[(i*4)+3];
  bitangent.y *= tangentPointer[(i*4)+3];
  bitangent.z *= tangentPointer[(i*4)+3];*/
  memcpy(bitangentPointer+(i*4), &(bitangent.x), sizeof(GLfloat)*3);
  bitangentPointer[(i*4)+3] = 1.0;
 }
 
 free(tan1);
 free(tan2);
}

haveblue
Aug 15, 2005



Toilet Rascal

Stanlo posted:

Try creating test normal maps, like one that does nothing but straight normals and you should get regular smooth shading. If you don't, something's wrong. Then just work onto the other directions to verify. I've probably hosed up tangent space normal mapping a bajillion times, feel free to pm me if you're having difficulty.

If I replace the normal map lookup with vec3(0,0,1), I do indeed get smooth shading. So it's got to be something involving texture coordinates.

haveblue
Aug 15, 2005



Toilet Rascal
It's an interleaved array of consecutive triangles. At each stride there is a position followed by a normal followed by a texcoord (hence the elementStride+6 instead of +3).

haveblue
Aug 15, 2005



Toilet Rascal
I should be calculating the same handedness for each vertex of a polygon, and if I don't then something is terribly wrong, correct?

e: Yes, something was terribly wrong.


code:
  vertex = vertPointer+(i*elementStride);
  texcoord = vertPointer+(i*elementStride)+6;
  
  float x1 = vertex[3] - vertex[0];
  float x2 = vertex[6] - vertex[0];
  float y1 = vertex[4] - vertex[1];
  float y2 = vertex[7] - vertex[1];
  float z1 = vertex[5] - vertex[2];
  float z2 = vertex[8] - vertex[2];
This did not match the layout of my vertex array. I correctly advanced by strides to get the base vertex pointer, but the 9 elements following it are not three 3-component positions; they're a 3-component position, a 3-component normal, a 2-component texcoord, and a bit of the next vertex. After fixing that, all the rest fell into place.
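
For reference, the corrected reads look something like this (elementStride being the per-vertex float stride, 8 here):

code:
GLfloat *v0 = vertPointer + ((i + 0) * elementStride);
GLfloat *v1 = vertPointer + ((i + 1) * elementStride);
GLfloat *v2 = vertPointer + ((i + 2) * elementStride);

/* positions are the first 3 floats of each vertex */
float x1 = v1[0] - v0[0], y1 = v1[1] - v0[1], z1 = v1[2] - v0[2];
float x2 = v2[0] - v0[0], y2 = v2[1] - v0[1], z2 = v2[2] - v0[2];

/* texcoords start at float 6, after 3 position + 3 normal floats */
float s1 = v1[6] - v0[6], t1 = v1[7] - v0[7];
float s2 = v2[6] - v0[6], t2 = v2[7] - v0[7];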

haveblue fucked around with this message at 21:04 on Oct 7, 2009

haveblue
Aug 15, 2005



Toilet Rascal

Mata posted:

To fix this I would have to make a new vertex for each unique permutation of coordv/coordt/coordn, which is not only a pain in the rear end to program but will also bloat the size of my models to hell.

Unfortunately this really is what you have to do. Each vertex in OpenGL has exactly one index which is used for all the attribute arrays. You should be able to have your 3D package generate models that already have that property and not have to worry about generating it yourself, although you're right about the increased footprint.
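
A sketch of the dedup, using hypothetical OBJ-style index triples (a hash map would be smarter for big meshes):

code:
#include <string.h>

typedef struct { int v, vt, vn; } Corner; /* one face corner's indices */

/* returns the single GL index for this corner, appending a new vertex
   the first time a (v, vt, vn) combination is seen */
int corner_to_index(Corner *unique, int *unique_count, Corner c)
{
    for (int i = 0; i < *unique_count; i++)
        if (memcmp(&unique[i], &c, sizeof c) == 0)
            return i;
    unique[*unique_count] = c; /* new combination: new GL vertex */
    return (*unique_count)++;
}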

What kind of models are you making that you have significant redundancy of normals?

haveblue
Aug 15, 2005



Toilet Rascal

Martman posted:

I'm working on a game engine for a class project (we weren't specifically assigned this project, we chose it, so I don't really see it as an academic integrity issue to get help with this certain issue), and we're using OpenGL.

The idea is that the interface is essentially something like Diablo 2 or most RTSes; i.e. isometric view, movement is done by mouse clicks. The ground is made up of square tiles, each tile having four corners whose heights are read from a map file.

Here's a simple mockup of the scenario we're wondering about: [mockup image]

I just wanted to make it clear, but I guess the problem is kind of simple: essentially we're wondering how to find, based on where you've clicked on the screen, exactly which pixel on the map you should be trying to move to. We can use picking to find the specific tile that was clicked on, but otherwise we're a bit confused... we have the location and angle of the camera and the definition of the 2D plane that would represent the appropriate tile, so I feel like it should be possible to find the intersection between the vector defined by the click and the tile, but I guess I'm not sure exactly how to describe the vector. Or maybe this is the wrong way to go about it?

You can use gluUnProject to find the world-space vector of the camera click (the function gives you the click's location on the near clip plane; just subtract the camera position from that). From there, you can use the camera location and that vector to find the intersection with the tile plane. The obvious way to do that would be to find the distance of the camera from the plane (plug the camera location into the plane equation) and then scale the click vector by (that distance / click vector magnitude) / (cosine of angle between click vector and plane normal) and add it to the camera position. There's probably a more efficient formula; that's just off the top of my head.
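
A sketch with GLU, assuming the camera position, matrices, and a tile plane nx*x + ny*y + nz*z + d = 0 are on hand:

code:
double wx, wy, wz;
/* winZ = 0 gives the click's position on the near clip plane */
gluUnProject(mouse_x, viewport[3] - mouse_y, 0.0,
             modelview, projection, viewport, &wx, &wy, &wz);

/* ray from the camera through the click */
double rx = wx - cam_x, ry = wy - cam_y, rz = wz - cam_z;

double denom = nx*rx + ny*ry + nz*rz;
if (fabs(denom) > 1e-9) { /* ray not parallel to the tile plane */
    double t = -(nx*cam_x + ny*cam_y + nz*cam_z + d) / denom;
    double hit_x = cam_x + t*rx;
    double hit_y = cam_y + t*ry;
    double hit_z = cam_z + t*rz;
}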

haveblue fucked around with this message at 07:09 on Feb 25, 2010

haveblue
Aug 15, 2005



Toilet Rascal

slovach posted:

I'm having an issue with rotating a matrix.

Over time my matrix seems to get wonky and my object will get skewed. I'm pretty sure it's not a problem with my math, but how should I go about avoiding this? The faster / more it rotates, the faster it distorts.

http://www.mediafire.com/?4h2ljzmymcm

Don't repeatedly add small incremental rotations to the same matrix; store the angle as a float and regenerate the whole matrix each time.
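
In GL terms (the same idea applies in any API), the fix is just this, with angle and timing variables assumed:

code:
angle += spin_rate * dt;            /* the angle is the only persistent state */
if (angle > 360.0f) angle -= 360.0f;

glLoadIdentity();                   /* matrix rebuilt fresh each frame, */
glRotatef(angle, 0.0f, 1.0f, 0.0f); /* so error can't accumulate */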

haveblue
Aug 15, 2005



Toilet Rascal

PDP-1 posted:

Is there any easy way to get rid of this unwanted projection tilt? I suppose I could render the little axes to their own texture and then draw it as a sprite on top of the rest of the image, but it just feels like there's a simpler solution that I'm missing.

Rather than setting up a whole new framebuffer, you can just change the viewport to a small rectangle in the corner and then the perspective will look correct.

However, you probably also want to draw the axes with an orthographic projection to eliminate all distortion permanently.
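
A sketch of the corner-viewport approach (sizes made up):

code:
/* draw the axis gizmo in a 96x96 box in the lower left corner */
glViewport(0, 0, 96, 96);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(-1.5, 1.5, -1.5, 1.5, -2.0, 2.0); /* orthographic: no perspective tilt */
glMatrixMode(GL_MODELVIEW);
/* ...draw the axes using only the camera's rotation... */
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glViewport(0, 0, window_w, window_h); /* restore the full window viewport */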
