PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I'm puttering around in XNA as an intro to 3D graphics and I'm getting a really strange error - adding text sprites seems to be messing up my 3D images.

The picture on the left below is what I want - some axes and a red triangle sitting on a ground mesh. When I try to put in some text the letters show up just fine, but the red triangle 'moves' below ground. It isn't actually changing location; the ground plane is just drawing on top of it.



The code below is the main draw loop; the lines that cause the problem are commented out. The base.Draw call draws the 3D stuff, the spriteBatch calls do the text.

code:
        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            base.Draw(gameTime);

            //GraphicsDevice.RenderState.FillMode = FillMode.Solid;
            //spriteBatch.Begin();
            //spriteBatch.DrawString(font, "ABC123", new Vector2(10, 10), Color.Yellow, 0,
            //    Vector2.Zero, 1, SpriteEffects.None, 0);
            //spriteBatch.End();
        }
The problem still happens if I strip out everything but spriteBatch.Begin() and spriteBatch.End() and don't draw any text, so something there is changing the state of the pipeline.

Any suggestions?

edit: On further research it looks like going into sprite mode sets a bunch of flags that don't get reset when returning to 3D mode on the next frame update. There is a writeup here with more details. Adding the following lines after spriteBatch.End() fixes the problem.
code:
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.AlphaBlendEnable = false;
GraphicsDevice.RenderState.AlphaTestEnable = false;
vvv: Yeah, it looks like the depth buffer is disabled when drawing sprites, but not re-enabled when going back to 3D.
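(If you're on XNA 4.0, where the RenderState API no longer exists, my guess at the rough equivalent of the reset above is putting the default state objects back, since SpriteBatch leaves blending on and depth testing off.)
code:
// Assumed XNA 4.0 equivalent of the RenderState fix above:
// restore the 3D-friendly defaults after SpriteBatch.End().
GraphicsDevice.DepthStencilState = DepthStencilState.Default;  // depth test/write back on
GraphicsDevice.BlendState = BlendState.Opaque;                 // alpha blending off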

PDP-1 fucked around with this message at 04:16 on Nov 16, 2009


PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Check out 'Introduction to 3D Game Programming with DirectX 10' by Frank Luna.

He does a pretty good job of explaining things in a logical sequence, and unlike most books with 'Game Programming' in the title he doesn't pull too many mathematical punches and provides the actual formulas for doing 3D operations.

My only real complaint about this book is that the author tends to introduce and discuss tiny code snippets one by one, so you need to download the examples off his website to get a big picture view of how everything works together.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.


I'm trying to make a set of XYZ axes that float in the corner of the viewing area to provide some sense of direction while navigating around a scene. In the image above the large set of axes is an object in world space that is sitting at the origin and the colored lines are unit vectors with RGB <=> XYZ. The little set of axes in the lower-left corner is a similar object, but scaled and translated so that it is always sitting in front of the camera's near plane.

The problem I'm running into is that the little axes aren't centered in the middle of the view space, so the projection transform causes them to be drawn with a slight tilt. Ideally, the big and little axes in the image above would have an identical orientation.

Is there any easy way to get rid of this unwanted projection tilt? I suppose I could render the little axes to their own texture and then draw it as a sprite on top of the rest of the image, but it just feels like there's a simpler solution that I'm missing.
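For reference, here's a rough sketch of one possible approach (DrawAxes is a made-up helper, 'view' is the camera's view matrix): give the little axes their own small viewport in the corner so they sit on that projection's axis, and build their view matrix from the camera's rotation only.
code:
// Sketch only: draw the corner gizmo in its own viewport so it is
// centered on that projection's axis and picks up no off-axis tilt.
Viewport full = GraphicsDevice.Viewport;

Viewport corner = full;
corner.X = 10;
corner.Y = full.Height - 110;
corner.Width = 100;
corner.Height = 100;
GraphicsDevice.Viewport = corner;

// Keep only the camera's rotation, then park the gizmo a fixed
// distance in front of the camera.
Matrix gizmoView = view;
gizmoView.Translation = Vector3.Zero;
gizmoView *= Matrix.CreateTranslation(0f, 0f, -5f);

Matrix gizmoProjection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, 1f, 0.1f, 10f);

DrawAxes(gizmoView, gizmoProjection);  // made-up helper that draws the RGB axes

GraphicsDevice.Viewport = full;        // restore the full viewport for the rest of the frame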

PDP-1 fucked around with this message at 19:00 on Apr 17, 2010

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
That worked brilliantly. Thank you.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.

Whilst farting I posted:

So if tangents are explicitly defined, then how exactly is the curve drawn?
The simple answer is to let s range over [0,1] in however many steps you want to use and then calculate:

P(s)=P1*h1(s) + P2*h2(s) + T1*h3(s) + T2*h4(s)

for each value of s. Or in pseudocode:

code:
// P1, P2 are the endpoint values and T1, T2 the tangents
// (floats here for a 1D curve; Vector2/Vector3 work the same way).
int iMax = 100;
float[] P = new float[iMax];

for (int i = 0; i < iMax; ++i)
{
    float s = (float)i / iMax;

    // Hermite basis (weighting) functions
    float h1 =  2*s*s*s - 3*s*s + 1;
    float h2 = -2*s*s*s + 3*s*s;
    float h3 =    s*s*s - 2*s*s + s;
    float h4 =    s*s*s -   s*s;

    P[i] = P1*h1 + P2*h2 + T1*h3 + T2*h4;
}
Just think of the functions h1, h2, h3, h4 as weighting functions matched to P1, P2, T1, T2 respectively. You'd expect the weight of P1 to be 1 at s=0 and the weight of P2 to be 0, and if you plug s=0 into h1 and h2 you'll see that is exactly the case. As s goes from 0 to 1 the weight of P1 drops to 0 and the weight of P2 rises to 1, both smoothly.

The tangent weighting functions h3 and h4 are a little less obvious, but they make sense if you look at the special case of a small but non-zero value of s. To a first-order approximation the weighting functions become

h1 = 1
h2 = 0
h3 = s
h4 = 0

And the point is calculated approximately as

P(s) = P1 + T1*s

which is essentially the first-order Taylor series of your function around s=0. There is an analogous Taylor expansion around s=1 using P2, T2.

So basically the whole gimmick here is that you have first-order Taylor series approximations of your function at s=0 and s=1, and you use the weighting functions h* to interpolate smoothly between them.

PDP-1 fucked around with this message at 15:27 on Apr 26, 2010

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Are there any tips or tricks for debugging shader files?

I'm using DirectX/HLSL and I'm finding custom shaders very frustrating to write, since you can't step through the code like you can in a normal IDE and there's no way to print values to a debug file.

Are there any kind of emulators that will take a .fx file as input, let you specify the extern values and vertex declaration, and walk through the code one line at a time?

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Thanks for all the replies on shader debuggers. I downloaded RenderMonkey and it helped get through my original problem so I now have a working texture/lighting model to base further work on. I'll take a look at PIX but I'm using an XP machine constrained to DX9 so it'd be the flaky version instead of the nicer DX10+ versions.

edit: had a question here, figured it out on my own.

PDP-1 fucked around with this message at 11:41 on May 12, 2010

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
The w component of your vector should pretty much always be 1. Points are translated via a 4x4 matrix with the amount of the translation stored in the 4th column (or row, depending on how you set up your system). If you use a number other than 1 to pad out your vector, you introduce an extra scale factor on those translation components and your points don't end up where you want them.

Even if you aren't trying to move the points around directly with a translation matrix, the view and projection matrices will still be affected by having something other than 1 in the last component of your vector, and you'll still get screwed-up vertex positions.
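A quick way to see the effect (a minimal XNA sketch using Matrix.CreateTranslation and Vector4.Transform):
code:
// Translate by (5, 0, 0) and watch what w does to the result.
Matrix translate = Matrix.CreateTranslation(5f, 0f, 0f);

Vector4 good = Vector4.Transform(new Vector4(1f, 2f, 3f, 1f), translate);
// good = (6, 2, 3, 1)   w = 1 moves the point by exactly 5 on X

Vector4 bad = Vector4.Transform(new Vector4(1f, 2f, 3f, 2f), translate);
// bad = (11, 2, 3, 2)   w = 2 doubles the translation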

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
This shows how to do fire with a particle engine, along with the code if you are familiar with C#/XNA.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I'd never heard of the discard statement for HLSL, so I looked it up and it does exist. It sounds like it's the equivalent of what Screeb described for GLSL.

From The Complete Effect and HLSL Guide:
discard: This keyword is used within a fragment shader to cancel rendering of the current pixel.
example: if(alphalevel < alpha_test) discard;

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
What values are you using for your near plane and far plane distances? If you have a really huge range you might be losing depth resolution which could cause the issue with the back squares jumping around. It might also cause z-fighting between adjacent edges and give you the jaggy lines, but I can't explain why that would matter when moving from left to right.

You could just try setting the z-planes to some smallish range like near=1, far=100 and see if it changes anything.
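In XNA that's just something like (assuming the usual perspective projection inside a Game class with a GraphicsDevice):
code:
// Tighter depth range: most of the depth buffer's precision sits near
// the near plane, so keep the near distance as large as the scene allows.
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,                   // 45 degree field of view
    GraphicsDevice.Viewport.AspectRatio,
    1f,                                   // near plane
    100f);                                // far plane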

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I'm having a problem with textures scaling shittily. I start with a texture that looks like this



and as I zoom in and out it converts between looking OK and having poor sampling on the seams between tiles:



I assume this is caused by the texture sampler having problems detecting the one pixel of dark border on some of the edges. Are there any tricks to get around this problem?


e: VVV I will give that a shot, thanks!

PDP-1 fucked around with this message at 23:42 on Oct 16, 2010

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.

Nippashish posted:

I've implemented the marching cubes algorithm and I'm getting some strange behavior that I'm hoping someone here can help me explain. My code produces results that look correct so long as I make the magnitude of the isosurface I'm building big enough, if it's too small I get a surface full of really nasty holes.

Here's an example, this image is the 5000 level set of 1000*(sqrt(r2) + cos((x + y)/2.0) + cos((y + z)/2.0)):


But here's the 5 level set of sqrt(r2) + cos((x + y)/2.0) + cos((y + z)/2.0) over the same domain:


In case it's not obvious from the second picture, there are holes EVERYWHERE:


This happens regardless of the level of detail, or the range of the coordinate axes. As far as I can tell the only factor that affects it is the magnitude of the function I'm grabbing a level set from. Does anyone know what's going on here?

I'd take a closer look at your interpolating functions. In your first image there seems to be a kind of stepping or terracing feature which really shouldn't be there, since you have way more samples than you need to get a smooth isosurface. In your second image the terracing is worse, and regions with a high rate of change are starting to show holes. In the closeup image it looks like triangles are being generated where the holes appear, but the vertices are in the wrong locations. These things would all make sense if the interpolating function was slightly wrong.

Maybe try simplifying your function to a sphere and then adjust the radius of the sphere to see if holes appear at high curvature/small radius.

e: paulbourke.net gives some sample interpolation code if you want to compare it against your own. If you were calculating the slope (mu) wrong, say as mu = 1/(valp2 - valp1), you would get a situation where 'large' functions [ 1 << (valp2 - valp1) ] produced a connected but step-ish surface like the one you are seeing, since mu would go to zero and each vertex would essentially snap to the nearest grid point. Then when you get to 'small' functions [ 1 ~= (valp2 - valp1) ] the snapping effect goes away and you get a mish-mash of incorrect vertex locations.
code:
/*
   Linearly interpolate the position where an isosurface cuts
   an edge between two vertices, each with their own scalar value
*/
XYZ VertexInterp(double isolevel, XYZ p1, XYZ p2, double valp1, double valp2)
{
   double mu;
   XYZ p;

   if (ABS(isolevel-valp1) < 0.00001)
      return(p1);
   if (ABS(isolevel-valp2) < 0.00001)
      return(p2);
   if (ABS(valp1-valp2) < 0.00001)
      return(p1);
   mu = (isolevel - valp1) / (valp2 - valp1);
   p.x = p1.x + mu * (p2.x - p1.x);
   p.y = p1.y + mu * (p2.y - p1.y);
   p.z = p1.z + mu * (p2.z - p1.z);

   return(p);
}

PDP-1 fucked around with this message at 22:57 on Apr 11, 2011

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Are there any known good algorithms for taking a list of randomly oriented triangles and joining them together into a mesh with minimal vertex and index lists?

I have a marching cubes algorithm that spits out a bunch of individual triangles. I'd like to take that output and form a mesh with vertex normals, texture coordinates, etc. I can imagine ways to do this but it seems like it's a common enough problem that someone else has probably solved it better than I could.
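For reference, the brute-force version I had in mind is just a dictionary keyed on quantized position (a rough sketch; 'soup' is an assumed flat list of triangle corner positions, three per triangle):
code:
// Weld shared corners from a raw triangle soup into vertex/index lists.
// Assumes System, System.Collections.Generic and Microsoft.Xna.Framework.
List<Vector3> vertices = new List<Vector3>();
List<int> indices = new List<int>();
Dictionary<Vector3, int> lookup = new Dictionary<Vector3, int>();
const float quantum = 0.0001f;  // snap so nearly-equal positions hash together

foreach (Vector3 raw in soup)
{
    // Quantize to kill floating-point jitter between neighbouring cubes.
    Vector3 key = new Vector3(
        (float)Math.Round(raw.X / quantum) * quantum,
        (float)Math.Round(raw.Y / quantum) * quantum,
        (float)Math.Round(raw.Z / quantum) * quantum);

    int index;
    if (!lookup.TryGetValue(key, out index))
    {
        index = vertices.Count;
        vertices.Add(raw);
        lookup.Add(key, index);
    }
    indices.Add(index);
}
// Normals can then be accumulated per face and averaged per welded vertex.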

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I've got an odd problem where the bottom 1/10th of the screen or so seems to lag behind the rest of the scene when the camera rotates. It doesn't show up in screenshots, but in the live program it looks like the simulated pic below when the camera is rotating counter-clockwise:



Any ideas? This is DX9 via XNA if that matters. Everything looks fine the second the camera is stopped.

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I ran into something I don't understand today while working on a shader - I was sampling a mipmapped texture and the shader ran fine. Then I changed the texture and forgot to generate the mipmap and the framerate absolutely tanked. When I generated the mipmap on the new texture things ran great again.

The obvious conclusion is to be sure to use mipmaps, but I don't understand why it makes such a difference in the framerate. Sampling a texture is sampling a texture, and if anything I'd have guessed that translating the UV coords to the right mip level would be more work for the GPU.

Why is sampling a mipmapped texture so much faster than sampling a non-mipmapped texture?

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
I doubt that the driver is generating anything on-demand, since the un-mipped texture turns into visual noise at long draw distances. That was my clue to look at the mipmap status in the first place, but it also suggests that the full-size texture is being used directly when lower detail levels aren't available.

The cache/data locality issue seems like it could be the cause. I have a shitton of data set in vertex buffers so it is likely that a lot of cache swapping is going on in general and loading the full 512x512 texture + bumpmap would take some time.

Thanks for the help.


PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.
Can anyone suggest a good book covering modern-ish (shader-based) OpenGL programming? It looks like things have changed quite a lot between the release of 3.0 and now so I'm a bit hesitant to just blindly grab the top seller off Amazon.

Thanks!
