Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Illusive Fuck Man posted:

I need to implement shadow mapping for the last part of a class project, and I totally understand it except for one thing. How do you create the frustum / projection+view matrices for the light source so that it covers everything in the camera's frustum? Everything I read seems to just skip over this.

As a dead Dane once said, "There's the rub."

In the simplest case, you don't -- just create the light/shadow frustum so that it covers the entire scene (or at least a reasonable area around the player) and use that. This guarantees you're covering all the occluders your viewer might see, but has the substantial downside of not allocating your shadow map texels very efficiently, leading to bad aliasing.

A large portion of shadow mapping research has been dedicated to dealing with this problem. Most games use an adaptation of this called "Cascaded Shadow Maps" [1] where you render the shadow map multiple times from the same viewpoint with different-sized frusta. This gives you multiple levels of detail which you can pick from based on how far the scene location is from the viewpoint -- distant objects use a larger frustum (lower texel density) while nearer objects use a shadow map with the same resolution but a smaller frustum (higher texel density). This works pretty well, and can be improved upon by a variety of techniques that let you adjust the near/far bounds of each cascade based on the content and fit each cascade more tightly to the actual camera frustum.
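For picking where each cascade begins and ends, a common starting point is the "practical split scheme," which blends uniform and logarithmic distributions between the near and far planes. A minimal C++ sketch (the function name and blend factor are mine, not from any particular engine):

C++ code:
#include <cmath>
#include <vector>

// Split [zNear, zFar] into cascade boundaries. lambda = 0 gives uniform
// splits, lambda = 1 gives fully logarithmic splits.
std::vector<float> CascadeSplits(float zNear, float zFar, int numCascades, float lambda)
{
    std::vector<float> splits(numCascades + 1);
    for (int i = 0; i <= numCascades; ++i)
    {
        float t = float(i) / float(numCascades);
        float logSplit = zNear * std::pow(zFar / zNear, t);
        float linSplit = zNear + (zFar - zNear) * t;
        splits[i] = lambda * logSplit + (1.0f - lambda) * linSplit;
    }
    return splits;
}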

There are other perspective-warping techniques that use a single shadow map [2]. These work really well in some circumstances, but have points where they fail badly in the general case. They can be a good option if you have a lot of control over the scene.

Finally, there's some real "blue-sky" research into just eliminating the shadow frustum altogether and sampling only points which correspond to actual screen samples. Irregular Shadow Maps [3] is a neat idea, but the cases where it really provides the best benefit can usually be solved almost as well with much less expensive methods.

[1] http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307(v=vs.85).aspx
[2] http://http.developer.nvidia.com/GPUGems/gpugems_ch14.html
[3] http://visual-computing.intel-research.net/publications/papers/2009/izb/soft_shadows_larrabee.pdf


OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
If the question is how you make that matrix in the first place, there's no one answer, but I'll try to articulate how you arrive at one:

The viewable area of the world is a six-sided volume: it starts at a four-sided poly that is your viewport (at the near plane) and ends at a four-sided poly that is the viewport projected out to the far plane.

Your goal, whether it's for that or a shadow cascade, is to create another volume that encloses the entire frustum, which basically means enclosing the 8 corner points of the near and far planes. There are infinitely many volumes that will do that. Most approaches boil down to:
- Determine the forward and side vectors of your shadow viewport. This can be virtually anything; it's usually done by taking the light direction and generating 2 perpendicular directions.
- Generate a shadow near plane and far plane by taking the nearest and farthest points of the view frustum along the light direction.
- Determine the size of the shadow viewport by tracing all 8 points back onto the shadow near plane and enlarging it to fit all 8.

There are variations on that as well. One problem with doing it this way is that it makes the shadow near/far plane sizes dependent on your view orientation, which causes sparkling artifacts. DICE recommended using a fixed-size viewport that can encompass any orientation and rounding to texel boundaries, which generally eliminates that. A sketch of the basic fitting step follows.
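Putting the steps above together, here's a minimal sketch using GLM (the names and conventions are mine; a real implementation would add the fixed sizing and texel snapping just described):

C++ code:
#include <cfloat>
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Fit a directional light's view/projection around the 8 world-space
// corners of the camera frustum.
void FitLightToFrustum(const glm::vec3 corners[8], const glm::vec3& lightDir,
                       glm::mat4& lightView, glm::mat4& lightProj)
{
    // Any up vector not parallel to the light direction will do.
    glm::vec3 up = (std::fabs(lightDir.y) < 0.99f) ? glm::vec3(0, 1, 0)
                                                   : glm::vec3(1, 0, 0);
    lightView = glm::lookAt(glm::vec3(0.0f), lightDir, up);

    // Light-space bounding box of the 8 corners.
    glm::vec3 mn(FLT_MAX), mx(-FLT_MAX);
    for (int i = 0; i < 8; ++i)
    {
        glm::vec3 p = glm::vec3(lightView * glm::vec4(corners[i], 1.0f));
        mn = glm::min(mn, p);
        mx = glm::max(mx, p);
    }

    // GLM's view space looks down -Z, so near/far come from the negated z range.
    lightProj = glm::ortho(mn.x, mx.x, mn.y, mx.y, -mx.z, -mn.z);
}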

OneEightHundred fucked around with this message at 17:35 on May 3, 2012

Illusive Fuck Man
Jul 5, 2004
RIP John McCain feel better xoxo 💋 🙏
Taco Defender
I thought I had it figured out (just making it a big fixed size that would cover the frustum from any angle), but as I started implementing it I realized I had no idea what to do in the situation of a point light inside the camera frustum. Screw it, I'm just going to make everything into spotlights with predefined frustums instead.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
There are only two real solutions for point lights:
- Cast separate shadowmaps for objects (or clusters of them) being hit by the light.
- Use a cubemap.

The first usually produces higher-quality results, but can't deal with lights inside shadow targets and also causes a ton of state thrash.

Secret third option: Don't cast shadows from point lights, which is much more common and viable than you'd initially think. Most artificial light sources are embedded in something, which makes them inherently directional.
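For the cubemap option, the shadow pass is just six 90-degree renders from the light's position. A sketch of the face setup with GLM (the up vectors follow the usual GL cubemap face orientations; zNear/zFar are placeholders):

C++ code:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// View matrices for the six faces of a point light's shadow cubemap,
// in +X, -X, +Y, -Y, +Z, -Z order.
void CubeShadowViews(const glm::vec3& lightPos, glm::mat4 views[6])
{
    const glm::vec3 dirs[6] = {
        { 1, 0, 0}, {-1, 0, 0},
        { 0, 1, 0}, { 0,-1, 0},
        { 0, 0, 1}, { 0, 0,-1}
    };
    const glm::vec3 ups[6] = {
        { 0,-1, 0}, { 0,-1, 0},
        { 0, 0, 1}, { 0, 0,-1},
        { 0,-1, 0}, { 0,-1, 0}
    };
    for (int i = 0; i < 6; ++i)
        views[i] = glm::lookAt(lightPos, lightPos + dirs[i], ups[i]);
}

// All six faces share one square 90-degree projection:
// glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f, zNear, zFar);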

OneEightHundred fucked around with this message at 00:15 on May 5, 2012

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

There are only two real solutions for point lights:
- Cast separate shadowmaps for objects (or clusters of them) being hit by the light.
- Use a cubemap.

The first usually produces higher-quality results, but can't deal with lights inside shadow targets and also causes a ton of state thrash.

Secret third option: Don't cast shadows from point lights, which is much more common and viable than you'd initially think. Most artificial light sources are embedded in something, which makes them inherently directional.

Dual paraboloid shadow maps are worth looking at too.
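The trick there is unwrapping each hemisphere onto its own 2D map with a paraboloid projection, so a point light costs two renders instead of six. The forward mapping is short enough to sketch (my names; direction assumed normalized, in light space, with z >= 0 for the front map):

C++ code:
#include <glm/glm.hpp>

// Map a normalized light-space direction on the z >= 0 hemisphere to
// paraboloid texture coordinates in [0,1]^2.
glm::vec2 ParaboloidUV(const glm::vec3& dir)
{
    glm::vec2 uv = glm::vec2(dir.x, dir.y) / (dir.z + 1.0f);
    return uv * 0.5f + 0.5f; // remap [-1,1] to [0,1]
}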

czg
Dec 17, 2005
hi
I'm toying around with trying to learn some DirectX 11 using SlimDX, and I'm having some trouble getting my instanced cubes to behave as expected.
What I want to happen is have a single 2x2x2 cube centered on the origin be rendered multiple times in a grid and scaled up. To do this I'm passing in a 4x4 transformation matrix per instance to the vertex shader and just multiplying it with the vertex position.
At the moment I'm not even trying to make them show up correctly in the viewport, just trying to get the post-vertex shader vertex values to be correct in PIX.


Here's my setup of the instance buffer and the inputlayout for the vertex shader:
C# code:
this.InstanceBuffer = new SlimDX.Direct3D11.Buffer(Device, new BufferDescription()
{
  BindFlags = BindFlags.VertexBuffer,
  Usage = ResourceUsage.Dynamic,
  SizeInBytes = Renderable.MAX_INSTANCES * 4 * 4 * sizeof(float), //MAX_INSTANCES 4x4 Matrices
  CpuAccessFlags = CpuAccessFlags.Write
});
InstanceBufferBinding = new VertexBufferBinding(InstanceBuffer,4*4*sizeof(float),0);

/* Some more stuff, compiling the shaders, etc... */

var elements = new[] 
{ 
  new InputElement("POSITION", 0, SlimDX.DXGI.Format.R32G32B32_Float, 0),
  new InputElement("NORMAL", 0, SlimDX.DXGI.Format.R32G32B32_Float, 0),
  new InputElement("TEXCOORD", 0, SlimDX.DXGI.Format.R32G32_Float, 0),
  //The instance matrices:
  new InputElement("POSITION", 1, SlimDX.DXGI.Format.R32G32B32A32_Float, 0, 1, InputClassification.PerInstanceData, 1), 
  new InputElement("POSITION", 2, SlimDX.DXGI.Format.R32G32B32A32_Float, 0, 1, InputClassification.PerInstanceData, 1),
  new InputElement("POSITION", 3, SlimDX.DXGI.Format.R32G32B32A32_Float, 0, 1, InputClassification.PerInstanceData, 1),
  new InputElement("POSITION", 4, SlimDX.DXGI.Format.R32G32B32A32_Float, 0, 1, InputClassification.PerInstanceData, 1)
};

WorldInputLayout = new InputLayout(Device, inputSignature, elements);
And then later in my rendering loop I do this:
C# code:
VertexBufferBinding[] vertexBuffers;
SlimDX.Direct3D11.Buffer indexBuffer;
int numIndexes, numInstances = 0;
//Gets a simple cube of 8 vertices and 18 indexes:
RenderableMesh.GetBuffers(out vertexBuffers, out indexBuffer, out numIndexes); 

context.InputAssembler.SetVertexBuffers(0, vertexBuffers);
context.InputAssembler.SetVertexBuffers(3, this.InstanceBufferBinding);
context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R32_UInt, 0);

DataBox dbox = context.MapSubresource(this.InstanceBuffer, MapMode.WriteDiscard, MapFlags.None);
DataStream d = dbox.Data;
d.Position = 0;

for (int i = 0; i < chunks.Length; i++) //Chunks = objects containing my cube instance info
{
  d.Write(Matrix.Transpose(chunks[i].TransformMatrix));
  numInstances++;
}
context.UnmapSubresource(this.InstanceBuffer, 0);

context.DrawIndexedInstanced(numIndexes, numInstances,0,0,0);
And finally here's my vertex shader:
code:
cbuffer MatrixBuffer
{
  matrix viewMatrix;
  matrix projectionMatrix;
};

struct VS_INPUT
{
  float4 position : POSITION0; 
  float4 normals  : NORMAL;
  float2 uvs      : TEXCOORD0;
};

struct PS_INPUT
{
  float4 position : SV_POSITION; 
  float4 normals  : NORMAL;
  float2 uvs      : TEXCOORD0;
};

PS_INPUT VertShader(VS_INPUT input, matrix instancePos : POSITION1)
{
  PS_INPUT output;

  input.position.w = 1.0f;

  output.position = mul(input.position, instancePos);
  output.position = mul(output.position, viewMatrix);
  output.position = mul(output.position, projectionMatrix);

  output.normals = input.normals;
  output.uvs = input.uvs;
  return output;
};
So far so good. When inspecting a frame in PIX, my instance buffer contains all the expected data, like this:



But the vertex shader gets passed what seems like garbage data, or mixed up with the normals or something:




I honestly have no idea what I'm doing wrong, so if anyone has any thoughts I'd be very grateful. I've tried debugging a single vertex, and the math all makes sense except that there are junk values being passed in. Googling around hasn't turned up anything helpful, and the tutorials I've been looking at don't seem to be doing anything different.

Disclaimer: I'm a complete novice when it comes to actually writing rendering code. Also there are probably tons of coding horrors all over the place, but I haven't really got any clear goals for this yet so I'm not sure how I'm going to organize everything.

czg
Dec 17, 2005
hi
A night's sleep later, I figured out that you of course have to have a vertex buffer binding for each of the 4 float4 inputs in a matrix, not just the first one.
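For anyone who hits the same wall: the other common arrangement keeps a single instance buffer binding and instead gives each float4 row of the matrix its own byte offset within the slot (the elements in the post above all used offset 0). A sketch in raw D3D11 terms; SlimDX's InputElement constructor exposes the same byte-offset parameter:

C++ code:
// Four rows of a per-instance matrix in one buffer bound to slot 1,
// 16 bytes apart (D3D11_APPEND_ALIGNED_ELEMENT also works for the offsets).
const D3D11_INPUT_ELEMENT_DESC instanceElems[] = {
    { "POSITION", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "POSITION", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "POSITION", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "POSITION", 4, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};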

The Glumslinger
Sep 24, 2008

Coach Nagy, you want me to throw to WHAT side of the field?


Hair Elf
Can someone link me to some good articles on forward and deferred rendering? I mean, I understand them in general terms, but not really in terms of how one would go about implementing them if they wanted to.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

crazylakerfan posted:

Can someone link me to some good articles on forward and deferred rendering? I mean, I understand them in general terms, but not really in terms of how one would go about implementing them if they wanted to.
I'm not sure how their implementations aren't clear since the difference between them pretty much is the implementation.

Forward rendering essentially means that if you have a surface, you draw that surface, and if you want more lights on that surface, you either need to set them up as uniforms when you draw the surface, or draw the surface some more times with additive blending or something.

Deferred rendering means that basic parameters of the surface, like normal, position, glossiness, etc. are all rendered out to a screen-space texture, and lights are applied after by picking regions of the screen and adding the light influence to them using the material parameter textures. The key difference here is that deferred rendering operates on screen-space textures, not on meshes.

The difference is essentially that forward rendering scales very badly with light count: you either eat a ton of overdraw from rendering parts of meshes that aren't affected, a ton of CPU cost in culling lights, or both. You also have to requeue geometry, spam state changes to update the light list on light passes, and so on.

The advantages of forward rendering are mainly that its best-case scenario (a low or very predictable number of dynamic lights) is much better, MSAA is still better than AA post-process filters, and it doesn't have problems with transparency. (Deferred rendering has problems with transparency because deferred innately assumes that each pixel has exactly one position and one set of shading properties, which isn't true if something translucent occupies the same pixel. Deferred solutions usually wind up using forward rendering for translucency, or designing things so that the game has no translucent objects that are shaded.)

Also, deferred is extremely geared toward having a consistent shading model, which doesn't work so well for things with unusual shading behavior (e.g. most skin shaders).
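The structural difference is easiest to see as pseudocode. A minimal C++-flavored sketch (Mesh, Light, and every function here are illustrative, not from any real engine):

C++ code:
// Forward: shade while rasterizing each mesh.
for (Mesh& m : meshes)
    for (Light& l : lightsAffecting(m))   // CPU-culled light list
        drawLit(m, l);                    // or one pass with several light uniforms

// Deferred: rasterize once into a G-buffer, then shade per light in screen space.
for (Mesh& m : meshes)
    drawToGBuffer(m);                     // writes normal, depth, albedo, gloss...
for (Light& l : lights)
    shadeScreenRegion(l, gbuffer);        // additively blends the light's influence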

OneEightHundred fucked around with this message at 02:08 on Jun 8, 2012

BlockChainNetflix
Sep 2, 2011
I'm trying to deform a sphere in the vertex shader, but am seeing the following when I generate the normals



At each vertex, I'm generating the normal by doing the following:

-Rotate the vertex position by a small offset in 4 directions: positive x, negative x, positive y, negative y (using 4 uniform rotation matrices).

-Deform these offsets with the same function the vertex is deformed by.

-Do a cross product of the offsets, (negx - posx) x (negy - posy), to generate a normal.

-If the dot product of the normal and the eye vector < 0, flip the normal with normal = -normal.

This looks fine most of the time, except for certain parts of the boundary between positive and negative z normals, it's creating a sort of seam or zipper effect with inverted normals, and I can't figure out why.

Here's another image highlighting flipped normals in red.


Little help?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

BLT Clobbers posted:

-If the dot product of the normal and the eye vector < 0, flip the normal with normal = -normal.
Why are you doing that?

quote:

This looks fine most of the time, except for certain parts of the boundary between positive and negative z normals, it's creating a sort of seam or zipper effect with inverted normals, and I can't figure out why.
What are you doing with the normals? Could you post the shader, or at least what's after the normal generation?

A common cause of this is using cubemaps without clamp-to-edge enabled, but I can only guess without the shader code.

Shameproof
Mar 23, 2011

How often does a vertex need to be updated in order to merit being put in a dynamic buffer?

BlockChainNetflix
Sep 2, 2011

OneEightHundred posted:

Why are you doing that?

It's an issue with using the cross product to create the normal: if the vertex is pointing away from us, the normal is flipped. If I change the offsets to y and z rotations, I need to flip the x. Although you've made me realise I don't need to do a dot product; I can just test whether z < 0.

quote:

What are you doing with the normals? Could you post the shader, or at least what's after the normal generation?

here's the code, I've removed the deformation stuff

code:
//4 offset matrices (mat4 since android has no mat3 helper functions)
uniform mat4 uYPMatrix;
uniform mat4 uYNMatrix;
uniform mat4 uZPMatrix;
uniform mat4 uZNMatrix;

//model-view-projection and normal matrices (declarations missing from the
//posted excerpt; assumed mat4 like the others)
uniform mat4 uMVPMatrix;
uniform mat4 uNMatrix;

//vertex position
attribute vec4 apos;

varying vec4 vpos;
varying vec4 eyepos;
varying vec3 vnorm;
varying vec3 eyenorm;

void main()
{
        //if we use rnorm as our normal it looks fine
	vec3 rnorm = normalize(apos.xyz);

        //create 4 offsets, we need 4 separate matrices since -mat does nothing
	vec3 nx1= (rnorm * mat3(uYPMatrix)) ;//positive x
	vec3 nx2= (rnorm * mat3(uYNMatrix)) ;//negative x
	vec3 ny1= (rnorm * mat3(uZPMatrix)) ;//positive y
	vec3 ny2= (rnorm * mat3(uZNMatrix)) ;//negative y

        //deformation stuff happened here

        //subtract the offsets to form a cross over the deformed vertex
 	nx1 = nx2-nx1;
 	ny1 = ny2-ny1;

        //do a cross product to generate the normal
 	vnorm = normalize(cross(nx1,ny1));


        //if the vertex is pointing away from us, the generated normal is flipped, so flip it.
 	if(rnorm.z<0.0)
 		vnorm  = -vnorm;

        //transform the normal (uNMatrix is a mat4, so use its upper 3x3)
        eyenorm = normalize(mat3(uNMatrix) * vnorm);

        //transform the position
        vpos = uMVPMatrix * apos;
        gl_Position = vpos;
}

/*code to generate the matrices uniforms
	//how much to offset by
	float scale = 0.1f;
	Matrix.setIdentityM(yn, 0);
	Matrix.rotateM(yn, 0, scale, 1.0f, 0.0f, 0.0f);

	Matrix.setIdentityM(yp, 0);
	Matrix.rotateM(yp, 0, -scale, 1.0f, 0.0f, 0.0f);

	Matrix.setIdentityM(zn, 0);
	Matrix.rotateM(zn, 0, scale, 0.0f, 1.0f,  0.0f);

	Matrix.setIdentityM(zp, 0);
	Matrix.rotateM(zp, 0, -scale, 0.0f, 1.0f,  0.0f);
*/
Here's an image rendering just the normals
gl_FragData[0] = vec4(vnorm.xy + 0.5, vnorm.z, 1.0);



As a test, I'm going to add another 2 offsets in a z rotation, and then do a cross on the two longest vectors, just in case one of the current offset vectors is coming up short for some reason.

BlockChainNetflix
Sep 2, 2011
Nevermind! It looks like this is a problem with my sphere generator. Thanks anyway.

Shameproof
Mar 23, 2011

Shameproof posted:

How often does a vertex need to be updated in order to merit being put in a dynamic buffer?

Nobody has any idea? I just need a power of ten. I've heard that if something changes less than every ten frames you should go static.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shameproof posted:

Nobody has any idea? I just need a power of ten. I've heard that if something changes less than every ten frames you should go static.
D3D10 docs suggest that D3D10_USAGE_DYNAMIC should be used if the data is updated "at least once per frame"

As usual, it depends on the drivers. It's possible that drivers are smart enough to leave dynamic buffers in VRAM if it detects that they're not being updated frequently, but who knows.

Also if you're using static buffers then make sure your locks/maps are flagged with the WRITEONLY usage flag or your performance will tank. If you're using OpenGL then BufferData/BufferSubData may also work.
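In D3D11 terms (the D3D10 call is analogous), the per-frame update pattern being described is map-with-discard. A minimal sketch, with made-up names and no error handling beyond the SUCCEEDED check:

C++ code:
#include <cstring>
#include <d3d11.h>

// Refill a D3D11_USAGE_DYNAMIC buffer; WRITE_DISCARD orphans the old
// contents so the GPU isn't stalled waiting on the previous frame.
void UpdateDynamicVB(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                     const void* data, size_t bytes)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, data, bytes);
        context->Unmap(buffer, 0);
    }
}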

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
I've been using GLEW for my OpenGL project. I originally followed the tutorial on the OpenGL.org wiki, which says you have to create a temporary 2.1 context with wglCreateContext, use that to call wglCreateContextAttribsARB to create a 3.x context, then delete the temporary one. Apparently doing it that way makes gDEBugger crash when you hit a breakpoint/pause.

I tried removing the temporary context part and discovered that wglCreateContext was already giving me a 3.3 context. And gDEBugger stopped crashing. Is not jumping through the extra hoop going to bite me in the rear end or is the tutorial just wrong?
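For reference, the hoop the wiki tutorial describes is roughly the following (a minimal sketch, no error handling; the WGL_CONTEXT_* constants and the PFNWGLCREATECONTEXTATTRIBSARBPROC typedef come from wglext.h):

C++ code:
// Create a throwaway legacy context just to fetch wglCreateContextAttribsARB,
// then replace it with the real 3.x context.
HGLRC temp = wglCreateContext(hdc);
wglMakeCurrent(hdc, temp);

PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, 0, attribs);

wglMakeCurrent(hdc, ctx);
wglDeleteContext(temp);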

Also, I got paranoid and decided to give GL3W a try, hopefully avoiding the problem, but Visual Studio gives me this nice message at runtime as soon as it hits a gl call:

quote:

Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention.

I'm probably going to just go back to GLEW, so it isn't critically important, but I'm curious what I messed up since I've never seen that message before. I'm assuming I didn't set it up right and/or there's someplace that I should be wrapping up with extern "C" {} that I missed or something?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Shameproof posted:

Nobody has any idea? I just need a power of ten. I've heard that if something changes less than every ten frames you should go static.

If you have any question as to how often it will be updated, AND the size of the resource is not "large" relative to PCIe bandwidth (e.g. most geometry buffers, some small-to-medium textures), just make it Dynamic.

FlyingDodo
Jan 22, 2005
Not Extinct
In the asf skeleton animation file format, what is the purpose of the 'axis' data for each bone? I have managed to draw a static skeleton from an asf file without using it. I'm assuming it must be used for animation, but I can't work out how.

Rahu
Feb 14, 2009


let me just check my figures real quick here
Grimey Drawer
I see this thread isn't very active, but I'm hoping someone can help me out with my terrible question. I'm trying to set up a very simple thing with glut (freeglut) that just makes a window with an ortho projection and draws a texture across the whole viewport.

My problem is that my textured quad always appears white even though my texture data is just an array of 0s, which seems like it should be black. Sorry for the dumb question, but I can't seem to find what I'm missing here.

http://pastebin.com/vUxRUCth

haveblue
Aug 15, 2005



Toilet Rascal
The GL_TEXTURE_2D target requires dimensions that are a power of 2, so trying to use a 640x480 image won't work.

pseudorandom name
May 6, 2007

GL has supported non-power-of-two textures for a decade now.

The problem is that the texture is entirely transparent.

haveblue
Aug 15, 2005



Toilet Rascal
Don't you have to explicitly enable that with GL_TEXTURE_RECTANGLE, though?

Also, he didn't turn on anything that would involve the alpha channel.

Rahu
Feb 14, 2009


let me just check my figures real quick here
Grimey Drawer
I was looking at stuff about the size, and from what I could tell OpenGL should be able to handle non-power-of-two textures without doing anything on remotely modern hardware.

What makes you believe it's transparent? I know I'm using RGBA, but without explicitly enabling alpha blending I assumed the alpha channel would do nothing. I changed everything to RGB instead of RGBA and have the same issue.

pseudorandom name
May 6, 2007

Rahu posted:

I was looking at stuff about the size, and from what I could tell OpenGL should be able to handle non-power-of-two textures without doing anything on remotely modern hardware.
That's also my understanding of ARB_texture_non_power_of_two.

Rahu posted:

What makes you believe it's transparent? I know I'm using RGBA, but without explicitly enabling alpha blending I assumed the alpha channel would do nothing. I changed everything to RGB instead of RGBA and have the same issue.

That was a wrong assumption on my part, sorry.

Visible Stink
Mar 31, 2010

Got a light, handsome?

Try setting your texture parameters after binding your texture, like this:
code:
	glGenTextures(1, &TEX_NAME);
	glBindTexture(GL_TEXTURE_2D, TEX_NAME);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, tex_data);

	// Prepare texture-related nonsense
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

	glutMainLoop();
I can't give you any reason why this should make a difference but it worked for me on my machine (Windows 7 64 bit, Visual Studio 2010).

pseudorandom name
May 6, 2007

Because texture 0 is bound by default, and he binds texture 1 after altering texture 0's parameters.

(glPixelStorei() needs to be called before glTexImage2D, though.)
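Condensed into one sketch (tex, width, height, and pixels are placeholders), the working one-time setup per texture is:

C++ code:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);      // bind before touching any texture state
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // global unpack state; affects the upload below
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);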

Rahu
Feb 14, 2009


let me just check my figures real quick here
Grimey Drawer
Thanks, didn't realize those were texture specific. Thought they would apply for all textures after they were set.

Toekutr
Dec 9, 2008
Can anyone point me to any good resources (website preferred, but a book is fine too) for learning the math needed for 3d graphics stuff?

I've written a raytracer and some simple software renderers, but I've mostly relied on tutorials to figure out the matrices and complex linear algebra. I'm working on something much more complex now and I'd like to understand how it all actually works, rather than tweaking stuff until I get something on-screen.

EssOEss
Oct 23, 2006
128-bit approved
I am trying to wrap my head around Direct3D 11 programming, having never before done any graphics work that goes beyond directly shoving pixel color values into some memory location.

Please confirm whether my understanding of resource management is correct. The way I understand it, I should bind the resources that I need for one (or ideally more) drawing operation at a time; after executing that draw operation, I should put them aside and bind the resources I need for the next operation (assuming they are different).

Is this correct? It feels right, but I have never worked with an API quite like this before, so I am not really sure.
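In code, the pattern described above looks something like this minimal sketch (every name is a placeholder):

C++ code:
// Bind what draw A needs, draw, then swap in what draw B needs.
context->IASetVertexBuffers(0, 1, &vertexBufferA, &strideA, &offsetA);
context->PSSetShaderResources(0, 1, &textureViewA);
context->DrawIndexed(indexCountA, 0, 0);

context->IASetVertexBuffers(0, 1, &vertexBufferB, &strideB, &offsetB);
context->PSSetShaderResources(0, 1, &textureViewB);
context->DrawIndexed(indexCountB, 0, 0);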

Also, what are some good Direct3D 11 books or online guides that are generally thought well of?

EssOEss fucked around with this message at 21:38 on Sep 1, 2012

RocketDarkness
Jun 3, 2008
Howdy, fellow programmer Goons! I'm fairly new to OpenGL and experimenting with various aspects of it. Right now, I'm doing some basic rendering of a handful of textured quads and the legendary teapot. However, I noticed that, depending on the order in which the various entities were being rendered, I received undesired results. Here are a few quick screenshots to demonstrate the problem. Clearly, even if the background is being rendered last, the Z-buffer should be sorting it properly regardless of render order. However, that is not happening, and it leads me to believe I have flubbed somewhere.

With Background - Rendered Last


No Background


I'm currently using SFML to assist with a few minor things so that I can focus on specific aspects at the moment.

Here is the InitRendering() function which is called shortly after boot-up. It creates the SFML window and handles general OpenGL setup and image loading.

code:
bool InitRendering()
{
	// Create the main window
	gp_app = new sf::Window( sf::VideoMode(400, 400, 32), "SFML OpenGL" );
	cam_pos = RsdVector3( 0.0f, 0.0f, 0.0f );

	// Set color and depth clear value
	glClearColor(0.7f, 0.9f, 1.0f, 1.0f);

    // Enable Z-buffer read and write
    glEnable(GL_DEPTH_TEST);
	glClearDepth(1.0f);
	glDepthMask(GL_TRUE);
	glDepthFunc(GL_LEQUAL);
	glDepthRange(0.0f, 1.0f);

	glEnable(GL_COLOR_MATERIAL);
	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //Set the blend function
    glDepthMask(GL_TRUE);

    // Setup a perspective projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.f, 1.f, 1.f, 500.f);

	/// Load and Store Textures
	glEnable( GL_TEXTURE_2D );
	const sf::Uint8* p_pixels;

	/// Teapot Texture
	sf::Image img2;
	img2.LoadFromFile( "assets/texture/test/checkerboard.png" );

	glGenTextures( 1, &teapot_texture_id );
	glBindTexture( GL_TEXTURE_2D, teapot_texture_id );

	p_pixels = img2.GetPixelsPtr();
	glTexImage2D(	GL_TEXTURE_2D, 
					0, 
					GL_RGBA, 
					img2.GetWidth(),
					img2.GetHeight(),
					0,
					GL_RGBA, 
					GL_UNSIGNED_BYTE,
					p_pixels );
	
	
	/// Plane Texture
	sf::Image img3;
	img3.LoadFromFile( "assets/texture/test/Plane_XZ.png" );

	glGenTextures( 1, &plane_texture_id );
	glBindTexture( GL_TEXTURE_2D, plane_texture_id );

	p_pixels = img3.GetPixelsPtr();
	glTexImage2D(	GL_TEXTURE_2D, 
					0, 
					GL_RGBA, 
					img3.GetWidth(),
					img3.GetHeight(),
					0,
					GL_RGBA, 
					GL_UNSIGNED_BYTE,
					p_pixels );

	/// BG Texture
	sf::Image img4;
	img4.LoadFromFile( "assets/texture/test/StarryBackground.png" );

	glGenTextures( 1, &bg_texture_id );
	glBindTexture( GL_TEXTURE_2D, bg_texture_id );

	p_pixels = img4.GetPixelsPtr();
	glTexImage2D(	GL_TEXTURE_2D, 
					0, 
					GL_RGBA, 
					img4.GetWidth(),
					img4.GetHeight(),
					0,
					GL_RGBA, 
					GL_UNSIGNED_BYTE,
					p_pixels );

	return true;
}
And this is the RenderScene() function which is called each pass. For the NoBackground screenshot, I simply commented out the code section labelled "///BG Texture". The glutSolidTeapot function is copy-pasted from GLUT, so GLUT is not actually being used in my code. If you need to reference it, let me know and I'll add it to my post.
code:
void RenderScene()
{
	// Set the active window before using OpenGL commands
    // It's useless here because active window is always the same,
    // but don't forget it if you use multiple windows or controls
    gp_app->SetActive();

	// Clear color and depth buffer
	glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

	/// Set Rendering Options
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

	/// Switch to Model View mode
	glMatrixMode( GL_MODELVIEW );
        glLoadIdentity();

	/// Apply some transformations
	glTranslatef( 0.0f, 0.0f, -20.0f );
	glTranslatef( -cam_pos.x, -cam_pos.y, -cam_pos.z );

	/// Plane Texture
	glBindTexture( GL_TEXTURE_2D, plane_texture_id );
    glBegin(GL_QUADS);
		glTexCoord2f( 0.0f, 0.0f );
		glVertex3f( -10.f, 0.0f, -10.0f );
		glTexCoord2f( 1.0f, 0.0f );
		glVertex3f( 10.0f,  0.0f, -10.0f);
		glTexCoord2f( 1.0f, 1.0f );
		glVertex3f( 10.f, 0.0f, 10.0f );
		glTexCoord2f( 0.0f, 1.0f );
		glVertex3f( -10.f, 0.0f, 10.0f );
	glEnd();

	/// Teapot
	glBindTexture( GL_TEXTURE_2D, teapot_texture_id );
	glFrontFace( GL_CW );
	glutSolidTeapot( 10.0f );
	glFrontFace( GL_CCW );

	/// BG Texture
	glBindTexture( GL_TEXTURE_2D, bg_texture_id );
        glBegin(GL_QUADS);
		glTexCoord2f( 0.0f, 0.0f );
		glVertex3f( -30.0f,  30.0f, -10.0f );
		glTexCoord2f( 1.0f, 0.0f );
		glVertex3f( 30.0f,  30.0f, -10.0f);
		glTexCoord2f( 1.0f, 1.0f );
		glVertex3f( 30.0f, -30.0f, -10.0f );
		glTexCoord2f( 0.0f, 1.0f );
		glVertex3f( -30.0f, -30.0f, -10.0f );
	glEnd();


	// Finally, display rendered frame on screen
    gp_app->Display();
}
I'm quite certain the issue is something fairly trivial, but I haven't had any luck searching via Google and on these forums. If you need any additional code, please feel free to ask, but I feel most of the non-GL code should be fairly intuitive. Many thanks for your time and assistance!

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
I'm not seeing anything that sticks out at me. I think your vertex winding on the background and plane might be backwards? But I don't think that would cause your problem.

gDEBugger is a pretty cool free OpenGL debugging tool that might help you figure out this problem, and future ones. There are newer and older versions (pre- and post-buyout by AMD). The new version's UI is kind of a mess and I'm not doing anything with super new OpenGL features, so I tend to use the old version. They also have slightly different feature sets, but they should both let you step through the drawing process, see the loaded textures, and point out any OpenGL errors as they happen.

UraniumAnchor
May 21, 2006

Not a walrus.
Not to rag on you or anything but I really do wonder when we'll finally reach the point that people stop using pre-shader OGL for anything.

I still have to deal with ES 1.x at work. :sigh:

shodanjr_gr
Nov 20, 2007

quote:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

I am pretty sure glTexParameter() calls are applied PER TEXTURE and not globally. That means that the currently active texture (from the last call to glBindTexture) will have its state affected by these. Also, I believe that you need to provide minification/magnification filters for an OpenGL texture to be valid.

What ends up happening is, if you comment out the background call (including the glBindTexture), the active texture at the end of render() will be your teapot_texture_id. During the next render, the glTexParameteri calls get applied to teapot_texture_id and the texture becomes valid for usage. Then you switch to the plane texture (which is not properly set up, hence your ground plane being screwed up) and then back to the teapot texture and your teapot renders fine.

Try applying these texture sampler settings to EACH texture, during initialization. You generally do not want to be reapplying them at runtime, unless you have to change their state.
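Following that advice with the textures from the init code above, the one-time setup would look something like this sketch:

C++ code:
// At init, right after each texture's glTexImage2D upload; the parameters
// stick to whichever texture is currently bound.
glBindTexture(GL_TEXTURE_2D, teapot_texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindTexture(GL_TEXTURE_2D, plane_texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindTexture(GL_TEXTURE_2D, bg_texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);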

RocketDarkness
Jun 3, 2008

shodanjr_gr posted:

I am pretty sure glTexParameter() calls are applied PER TEXTURE and not globally. That means that the currently active texture (from the last call to glBindTexture) will have its state affected by these. Also, I believe that you need to provide minification/magnification filters for an OpenGL texture to be valid.

What ends up happening is, if you comment out the background call (including the glBindTexture), the active texture at the end of render() will be your teapot_texture_id. During the next render, the glTexParameteri calls get applied to teapot_texture_id and the texture becomes valid for usage. Then you switch to the plane texture (which is not properly set up, hence your ground plane being screwed up) and then back to the teapot texture and your teapot renders fine.

Try applying these texture sampler settings to EACH texture, during initialization. You generally do not want to be reapplying them at runtime, unless you have to change their state.

Dang, you hit the nail on the head. Tossing those lines of code in after each BindTexture call fixed it right up. I really appreciate it! And thanks to everyone else that spent any time looking, as well.

UraniumAnchor posted:

Not to rag on you or anything but I really do wonder when we'll finally reach the point that people stop using pre-shader OGL for anything.

I'm starting with the basics, since I really like to understand everything that's going on and it also is meant to be an opportunity to brush up on and refine my 3D math skills to a fine sheen. But I'd love to read about the differences, if you can point me to any websites or blogs that explain such. I'm not familiar with the term.

ZombieApostate posted:

I'm not seeing anything that sticks out at me. I think your vertex winding on the background and plane might be backwards? But I don't think that would cause your problem.

gDEBugger is a pretty cool free OpenGL debug tool that might be able to help you figure out this, and future, problems, though. There's newer and older versions (pre and post buyout by AMD). The new version's UI is kind of a mess and I'm not doing anything with super new OpenGL features, so I tend to use the old version. They also have slightly different feature sets, but they should both let you step through the drawing process, see the loaded textures and point out any OpenGL errors as they happen.

Thanks for the links! I'll check that tool out. Good point about the inverted vertex winding, though you were correct that it wasn't the source of the problem. Is there any reason to go with CCW (the OGL default) over CW? Would it be because most modeling programs tend to save their data in CCW order or something of that nature?

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
OpenGL is traditionally a right handed coordinate system, which fits with CCW winding. Basically all the tools and sample code I've ever seen assume the same. You can change it to a left handed system and/or CW if you really wanted to, but it seems like a lot of hassle for no gain.

RocketDarkness
Jun 3, 2008
In hindsight, that makes perfect sense. Never thought about correlating the vertices with the fact that the coordinate system was right-handed 'til now. Thanks again! No point going against the grain for no reason.

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
I ask a lot of questions, so it's nice to be helpful when I can :v:

shodanjr_gr
Nov 20, 2007

RocketDarkness posted:

Dang, you hit the nail on the head. Tossing those lines of code in after each BindTexture call fixed it right up. I really appreciate it! And thanks to everyone else that spent any time looking, as well.

No problem :).

Just to reiterate, there isn't much point in setting the texture filtering modes every time you render (and it probably hurts performance-wise). Just do it once when you initialize the textures and then switch them only when you really need to. Also, use GL_LINEAR :).


UraniumAnchor
May 21, 2006

Not a walrus.

RocketDarkness posted:

I'm starting with the basics, since I really like to understand everything that's going on and it also is meant to be an opportunity to brush up on and refine my 3D math skills to a fine sheen. But I'd love to read about the differences, if you can point me to any websites or blogs that explain such. I'm not familiar with the term.

This is the first one that comes to mind, but even that's a bit old now. I remember finding another one a while back that was more up to date, but I seem to have lost the link to it.

Edit: This is another one, covering 3.x/4.0, but it hasn't been updated in a while. Still better than using fixed functionality unless you're targeting ES 1.x devices.

UraniumAnchor fucked around with this message at 19:42 on Sep 6, 2012
