Entheogen
Aug 30, 2004

by Fragmaster
I looked through this forum and could not find a thread dedicated to graphics questions, so let's use this one to ask any sort of 3D graphics questions we might have.

To start it off: how would I go about querying GPU memory in OpenGL? I would also like to measure the size of my display lists once I generate them. The reason I want to do this is that display lists, when compiled, do not seem to cache repeated vertices the way vertex arrays / VBOs can. However, they do seem to run faster than VBOs for large geometric objects.

Is there also a way to minimize the size of a display list that generates the same geometric object, by somehow not repeating the same vertices, the way you can with vertex arrays?
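As far as I know, core OpenGL has no query for how much memory a display list actually occupies; that bookkeeping is driver-internal. The vertex sharing you get from vertex arrays, though, is just indexing, and you can build the index buffer yourself before handing the data to GL. A quick sketch (Python, made-up names):

```python
def dedup_vertices(vertices):
    """Collapse repeated vertices into a unique vertex list plus an
    index buffer -- the sharing that indexed vertex arrays / VBOs
    (glDrawElements) give you and display lists don't."""
    unique = []      # unique vertices, in first-seen order
    index_of = {}    # vertex -> its slot in `unique`
    indices = []     # one index per input vertex
    for v in vertices:
        key = tuple(v)
        if key not in index_of:
            index_of[key] = len(unique)
            unique.append(key)
        indices.append(index_of[key])
    return unique, indices

# two triangles forming a quad: 6 input vertices, 4 unique
tris = [(0, 0), (1, 0), (1, 1), (0, 0), (1, 1), (0, 1)]
verts, idx = dedup_vertices(tris)
```

Six vertices collapse to four, with the shared edge reused through the index list.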

Also, another question: has anybody had any success using geometry shaders in OpenGL? I believe NVidia made an extension for them, but I am not quite sure how to use them. Do they still use GLSL?

vvvvvvvvvvvvvvv
No, man, I'm cool, I got them working. It wasn't hard after getting my program to work with vertex arrays. If you have any interesting tricks to share about using them, I would like to see them. Thanks anyway.

Entheogen fucked around with this message at 22:22 on Jul 8, 2008


Thug Bonnet
Sep 22, 2004

anime tits ftw

I got hammered at work and I didn't get a chance to email you the VBO examples yet. Do you still need them? I thought I saw in the gamedev thread that you had got it working.

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?

haveblue
Aug 15, 2005



Toilet Rascal

Thug Bonnet posted:

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?

No, not really. It will be deprecated whenever they get around to releasing OpenGL 3.0.

heeen
May 14, 2005

CAT NEVER STOPS

Entheogen posted:

Also another question, does anybody have any success using Geometry shader in OGL? I believe NVidia made some extension for them, but I am not quite sure how to use them. Does it still use GLSL?

Yes and Yes. They're quite simple to use. Here's something I googled for you:
http://cirl.missouri.edu/gpu/glsl_lessons/glsl_geometry_shader/index.html

stramit
Dec 9, 2004
Ask me about making games instead of gains.

Thug Bonnet posted:

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?

Real answer: No, there is no reason to. You can just wrap any immediate-mode call into a VBO / dynamic VBO and do it that way.

Practical answer: I'm lazy and draw full-screen quads in immediate mode all the time :S. It's only 4 verts, it's not that slow, right...

Entheogen
Aug 30, 2004

by Fragmaster

Thug Bonnet posted:

I got hammered at work and I didn't get a chance to email you the VBO examples yet. Do you still need them? I thought I saw in the gamedev thread that you had got it working.

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?

I think that if you wanted to make a program that was not real-time but rendered complex images to files, then you could use immediate mode, because speed is not much of a concern. I think if you used OpenGL that way it would still be faster than a software renderer, although you would be moving a poo poo-ton of data between the CPU and GPU.

Entheogen
Aug 30, 2004

by Fragmaster
Is there a way to compile a display list but store it (as a blob in client memory, say) and then move it to the GPU at your own discretion, when you actually need to draw it?

What I would like to do is generate a lot of huge display lists that would in no way fit in my GPU memory at once, but I only ever need to render one of them at a time.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Thug Bonnet posted:

My question: Is there any reason to ever use immediate mode in OpenGL for anything but toy programs?
It's somewhat nice for doing trivial dynamic geometry like particles and beam sprites.

StickGuy
Dec 9, 2000

We are on an expedicion. Find the moon is our mission.

Entheogen posted:

What I would like to do is generate a lot of huge display lists that in no way would fit at once in my GPU memory, but I only need to render one of them at a time.
What are these huge display lists going to be used for?

Entheogen
Aug 30, 2004

by Fragmaster

StickGuy posted:

What are these huge display lists going to be used for?

It's for the same project I posted screenshots of on page 6 of the "post your stuff" thread. It is for volume visualization. Right now I generate a display list of the subdivided volume and draw that. I would like to have an array of volumes that the user can switch between, but only one volume will ever need to be displayed at a time. I think that by pre-generating all the display lists and then just loading and offloading them from the video card, I can cut down on the time it takes to switch from one volume to another.

StickGuy
Dec 9, 2000

We are on an expedicion. Find the moon is our mission.
I'm still puzzled by your thousands of triangles approach. In any case, I don't know if you'd really gain a lot of speed by caching the display lists in main memory over recompiling them as necessary.

EDIT: vvv I'll put together a simple demo.

StickGuy fucked around with this message at 09:56 on Jul 24, 2008

Entheogen
Aug 30, 2004

by Fragmaster

StickGuy posted:

I'm still puzzled by your thousands of triangles approach. In any case, I don't know if you'd really gain a lot of speed by caching the display lists in main memory over recompiling them as necessary.

It works pretty well for me so far. I just normalize the scalar field to be between 0 and 1 and use that as the alpha value for the vertices I create. I use some simple normal-distribution and linear shaders to tweak how they come out, too. I also made a simple recursive subdivision function that divides a cube into 8 subcubes if it detects that the cube is not uniform. That greatly boosted my FPS and cut the memory requirement on the GPU.

I will try raycasting next, but that looks a lot harder than what I am doing now. What would you suggest, other than raycasting, that I could also give a try?
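The recursive subdivision described above could look something like this (a Python sketch; the max-minus-min uniformity test and all names are made up, not the actual implementation):

```python
def subdivide(volume, x0, y0, z0, size, threshold, out):
    """Recursively split a cubic region into 8 subcubes while its values
    are not uniform; leaves are appended as ((x0, y0, z0), size, mean).
    `volume` maps (x, y, z) -> scalar."""
    vals = [volume[(x, y, z)]
            for x in range(x0, x0 + size)
            for y in range(y0, y0 + size)
            for z in range(z0, z0 + size)]
    spread = max(vals) - min(vals)
    if spread <= threshold or size == 1:
        # uniform enough (or can't split further): emit one leaf cube
        out.append(((x0, y0, z0), size, sum(vals) / len(vals)))
        return
    h = size // 2
    for dx in (0, h):          # recurse into the 8 child cubes
        for dy in (0, h):
            for dz in (0, h):
                subdivide(volume, x0 + dx, y0 + dy, z0 + dz, h, threshold, out)
```

A fully uniform region stays one leaf; a single hot voxel forces the split into 8 children, which is where the FPS/memory win comes from.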

Entheogen fucked around with this message at 09:45 on Jul 24, 2008

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
After about a day of trying to get skinned meshes in HLSL working I learned about PIX and discovered that you do in fact need to set vertex declarations (heh). This is what I have so far:



That's supposed to be a man-shaped figure, so something's off, and I assume it's my bone transforms. They should be matrices that transform the bone from bone space to world space, correct?

edit: reading this paper shows me this equation:


I'm assuming that the first matrix is one that transforms a bone from bind-pose to world, but I don't know what the second matrix represents.

MasterSlowPoke fucked around with this message at 13:03 on Jul 24, 2008

StickGuy
Dec 9, 2000

We are on an expedicion. Find the moon is our mission.

Entheogen posted:

I will try raycasting next, but that looks a lot harder than what I am doing now. What would you suggest other than raycasting that I could also give a try?
I've put together a demo program illustrating the slice-based technique I was telling you about in the other threads. It's not the best approach and there are a number of ways to improve it, but it doesn't require drawing thousands and thousands of triangles. You'll need GLUT, GLEW, and your favorite C++ compiler to get it to work.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

That's supposed to be a man-shaped figure, so something's off, and I assume it's my bone transforms. They should be matrices that transform the bone from bone space to world space, correct?
There are generally two ways to do weighted skeletal animation: Pretransformed weight lists and matrix palette.

Doom 3's MD5 format uses the first. With pretransformed weight lists, each vertex references only a list of weights, and the weights contain coordinates in bone space premultiplied by the influence. The final result is achieved by transforming those values from bone space into world space and summing them.

Matrix palette instead has a base (a.k.a. reference) model in world space, and a base pose. The transformed position for each weight is calculated by transforming the vertex in the base pose into bone space using the bone's base pose (or rather, the inverse of it), then transforming it into world space based on its current pose. The results of that are then multiplied by the influence and summed to form the result.


What you're looking at is the latter. The second matrix is the inverse of the base pose.


In implementation, you can concatenate the two matrices into a single matrix that transforms a base-pose vertex to its new location with one matrix multiply rather than two.

If you're going to do this in software (as opposed to in the vertex shader), you can pre-blend the matrices for each unique weight list since the number of unique weight combinations/values is generally fairly low, which lets you avoid branch mispredictions in the transform code.
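To make the matrix-palette math concrete, here is a minimal sketch (Python; translation-only bones to keep it short — real bone matrices carry rotation too, but the concatenation step is identical):

```python
def mat_mul(a, b):
    """Concatenate two 4x4 matrices (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def inverse_translate(x, y, z):
    return translate(-x, -y, -z)

def transform(m, v):
    """Apply a 4x4 matrix to a 3D point (w = 1)."""
    p = [v[0], v[1], v[2], 1.0]
    return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(3))

def skin(vertex, influences):
    """Matrix-palette skinning: per influence, build
    current_pose * inverse(base_pose), transform the base-pose vertex,
    scale by the weight, and sum."""
    out = (0.0, 0.0, 0.0)
    for weight, current, inv_base in influences:
        m = mat_mul(current, inv_base)  # identity when current == base
        p = transform(m, vertex)
        out = tuple(o + weight * c for o, c in zip(out, p))
    return out
```

When the current pose equals the base pose, every `m` is the identity and the vertex comes back unchanged — exactly the sanity check described further down the thread.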

OneEightHundred fucked around with this message at 21:19 on Jul 24, 2008

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
Actually, that is an MD5 model, but I am computing a base pose and base model at load time, as I've never seen any other description of how to do it. I also want to perform this on the graphics card, not the CPU. I've actually done it on the CPU before, and for some reason it was a lot easier there.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

Actually, that is an MD5 model, but I am computing a base pose and base model at load time as I've never seen any other description on how to do it.
If you're going to do transformation on the GPU, then it's definitely the better way since the alternative is using up all your texcoord/attrib streams to save a handful of vector multiplies.

Getting matrix palette skinning to work basically involves two steps: Getting it to work right in the base pose, then getting the animation to work right.

The reason for the first is that it's really easy to tell when the base pose is correct: Your transform matrices should all be identity matrices (or will be only slightly off due to floating point error). You get the transform matrix by computing (current * inverse(base)). Obviously, when current is base, you should be getting an identity matrix, meaning none of the vertices will get changed at all.

If you can't get them to be identity matrices, troubleshoot the computation I just mentioned. Make sure they're in world space and not bone space. Make sure you inverted the base poses after concatenating them to bring them into world space, not before.

If you can get the transform matrices to be identity and the result is still hosed up, then it means the vertices in your base model were calculated wrong.

If you can get it to that point, and it animates incorrectly, then your transform matrix is probably inverted, or the transform operation in your shader is backwards. The other possibility is that your base AND animated poses were both in bone space when you calculated the transform matrices, so make sure you have a way of debugging what the SKELETON looks like as well.

OneEightHundred fucked around with this message at 02:14 on Jul 25, 2008

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
After a lot of tweaking, I've got it pretty much working! My only hangup? It's flipped!


In engine, In editor

I've been messing around but I can't seem to get it to flip back around.

edit: I'm at my wit's end with this. I can't get the model to flip back; the only way to get it to display right is to flip everything in the editor.

MasterSlowPoke fucked around with this message at 23:04 on Jul 25, 2008

midnite
May 10, 2003

It's all in the wrist
If you want to learn ALL about animation, inverse kinematics, locomotion/gait, etc., then I highly recommend looking at this page:

http://graphics.ucsd.edu/courses/cse169_w04/

It's the UCSD course page for CSE169. There are PDFs and PowerPoint slides there; grab the PowerPoint slides and just start reading through them. They are easy reads that read like a book (unlike a lot of PPT presentations, where you need someone talking to explain what each slide means). You will learn a great deal about animation, blending, etc. I lucked out finding it with Google some time ago.

As a bonus, there are Calculus, Linear Algebra and Quaternion reviews in the slides if you are rusty (or had never learned), just to make sure you have the math you need also.

midnite
May 10, 2003

It's all in the wrist

MasterSlowPoke posted:

edit: Im at my wit's end with this, I can't get the model to flip back, the only way to get it to display right it to flip everything in the editor.

Does the editor use the same handedness and coordinate system you do? You might have to do some conversion on export, from right-handed to left-handed or vice versa. Do the positive X, Y, and Z axes point in the same directions as in your engine? If not, you'll have to flip the necessary coordinates in your verts, normals, and rotations.
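For example, mirroring right-handed data into a left-handed engine by negating X also flips triangle winding, so the indices need rewinding too (Python sketch with made-up structures):

```python
def to_left_handed(verts, tris):
    """Mirror right-handed data into a left-handed system by negating X.
    Mirroring flips triangle winding, so swap two indices per triangle
    to keep faces front-facing. verts are (x, y, z) tuples, tris are
    index triples."""
    flipped = [(-x, y, z) for (x, y, z) in verts]
    rewound = [(a, c, b) for (a, b, c) in tris]
    return flipped, rewound
```

Normals need the same axis negated, and rotations have to be converted consistently as well, or the skeleton and the mesh end up in different handednesses.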

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Easy way to tell is to translate the model along each axis and find out if it's going the same direction as it is in the editor. If it is, your projection matrix is flipped. If it isn't, your model was loaded flipped.

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through


Yeah, it was definitely the projection that caused the error.

midnite posted:

Does the editor work in the same handedness and coordinate system you are? You might have to do some conversion on export to convert from right-handed to left-handed or vice-versa. Does the positive X, Y and Z axis go in the same directions are in your engine? If not you'll have to flip the necessary coordinates in your verts, normals and rotations.

I'm not sure what handedness Blender uses, but I guess Doom 3 uses a left-handed system, as that's what the MD5 exporter assumed. Either that, or when I changed the model components from z-up to y-up, that also changed the handedness. Either way, I finally figured out where to apply a -1 scale on the X axis: in the middle of the bone-to-world transform.

Shazzner
Feb 9, 2004

HAPPY GAMES ONLY

When is OpenGL 3.0 going to be released and will it be the next big thing?

Entheogen
Aug 30, 2004

by Fragmaster

StickGuy posted:

I've put together a demo program illustrating the slice-based technique I was telling you about in the other threads. It's not the best approach and there's a number of ways to improve it, but it doesn't require drawing thousands and thousands of triangles. You'll need GLUT, GLEW and your favorite C++ compiler to get it to work.

So it doesn't draw view-oriented quads? They remain constant in their position, but the 3D texture coordinates are what change based on the view?

How exactly does 3D texturing work? If I define a 3D texture for a flat polygon, does that polygon sort of slice through the 3D texture, get a 2D slice of it, and map it onto the polygon? Is that roughly how this works?

You do some matrix and vector calculations on the client side. Do you think there is an easy way to make that happen in a vertex shader, that is, to compute the 3D tex coords there? Could I just compute the eye vector and use that to generate tex coords?

Also, is it possible to talk to you on AIM or ICQ?

Shazzner posted:

When is OpenGL 3.0 going to be released and will it be the next big thing?

Probably not. As far as I understand, all of the cool new technologies like geometry shaders are already supported in OpenGL via extensions; OpenGL 3.0 will just make them standard, which I guess will make using them easier for everybody. Also, game makers will probably stick with DX10.

Entheogen fucked around with this message at 15:52 on Jul 27, 2008

haveblue
Aug 15, 2005



Toilet Rascal

Entheogen posted:

How exactly does 3d texturing work. If I define 3d texture for a flat polygon, then that polygon sort of slices through that 3d texture and gets a 2d slice of it and maps it on polygon? Is that kind of how this works?

Yep. You assign each vertex 3 texture coordinates instead of 2, and it interpolates through a 3D texture space instead of a 2D plane. There are no limits on these coordinates, they don't have to form axis-aligned planes or anything, but you should avoid bent polygons in texture space just like in world space.

GL 3.0 will also have a major overhaul of the API, dropping the fixed-function pipeline and bringing in OOP, so it should at least be easier to use than the current system.
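For reference, the interpolation a sampler3D does under GL_LINEAR looks roughly like this (Python sketch; real hardware also accounts for texel centers and wrap modes):

```python
def trilinear(tex, s, t, p):
    """Trilinear sample of a 3D texture stored as nested lists
    indexed [z][y][x], with coordinates in [0, 1]."""
    nx, ny, nz = len(tex[0][0]), len(tex[0]), len(tex)
    x, y, z = s * (nx - 1), t * (ny - 1), p * (nz - 1)
    x0, y0, z0 = int(x), int(y), int(z)
    x1, y1, z1 = min(x0 + 1, nx - 1), min(y0 + 1, ny - 1), min(z0 + 1, nz - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    lerp = lambda a, b, f: a + (b - a) * f
    # blend along x on each of the 4 edges, then y, then z
    c00 = lerp(tex[z0][y0][x0], tex[z0][y0][x1], fx)
    c10 = lerp(tex[z0][y1][x0], tex[z0][y1][x1], fx)
    c01 = lerp(tex[z1][y0][x0], tex[z1][y0][x1], fx)
    c11 = lerp(tex[z1][y1][x0], tex[z1][y1][x1], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
```

The fragment's 3D texture coordinate, interpolated across the polygon, just indexes into this volume — the polygon's position in world space never enters into it.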

haveblue fucked around with this message at 15:24 on Jul 27, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shazzner posted:

When is OpenGL 3.0 going to be released and will it be the next big thing?
Nobody knows, supposedly later this year.

Will it be the next big thing? No, because of the previous answer! The problem with OpenGL ever since D3D7 has been that features take too long to get through. D3D went through two major versions before OpenGL finally got a good render-to-texture extension. The spec for OpenGL 3 was supposed to be out in September of LAST YEAR. Developers cannot adopt a graphics API that does not exist, and will not adopt one that continues to lag on features without providing a clear benefit. The Xbox 360 has made the portability argument less clear-cut, and the usability arguments have been fairly moot since D3D9 came out with a sane API.

quote:

OpenGL3.0 will just make them standard
The main differences are that OpenGL 3 eliminates some of the legacy cruft that hurts OpenGL 1-2's performance. OpenGL has been unable to eliminate a lot of design decisions that, in retrospect, were not very good, due to legacy support.

One example would be that texture dimensions are mutable, and replacing a texture mandates rescaling all of the mipmap levels even though they're likely to be overwritten immediately. In OpenGL 3, texture dimensions are immutable.

It also has asynchronous object creation, another performance boost.

As for what game makers will do, it depends heavily on how accepted Vista and D3D10 are. OpenGL 3 is probably going to be XP compatible, and will offer support for D3D9 and D3D10 hardware with one API. If D3D10 and Vista aren't widespread, it would provide a very compelling reason to use it over D3D.

OneEightHundred fucked around with this message at 00:55 on Jul 28, 2008

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

OneEightHundred posted:

The main differences are that OpenGL 3 eliminates some of the legacy cruft that hurts OpenGL 1-2's performance. OpenGL has been unable to eliminate a lot of design decisions that, in retrospect, were not very good, due to legacy support.
Real OGL3 != the OGL3 that was discussed until last year's SIGGRAPH. I assume they'll actually explain WTF happened at this year's SIGGRAPH (so a few weeks), but from what I've heard it's a far less radical change than what they announced (supposedly mobile people were unhappy that it would not look very much like OpenGL ES, and considering that's the only area where they've had a real lock on the market, back to the drawing board they went).

StickGuy
Dec 9, 2000

We are on an expedicion. Find the moon is our mission.

Entheogen posted:

So it doesn't draw view oriented quads. They remain constant in their position, but the 3d texture coordinates is what changes based on view?

How exactly does 3d texturing work. If I define 3d texture for a flat polygon, then that polygon sort of slices through that 3d texture and gets a 2d slice of it and maps it on polygon? Is that kind of how this works?

You do some matrix and vector calculations in client side. Do you think there is an easy way to make that happen in vertex shader? that is compute 3d tex coordinates there? Could I just compute eye vector and use that to generate tex coords?

Also is it possible to talk to you on AIM or ICQ?
The idea is that the quads are always perpendicular to the eye vector (essentially a billboard). The quads slice through the volume so each pixel on a quad gets a tri-linearly interpolated texture value. The tricky thing about this approach is that you must compute a matrix that maps points on the quads into points in the volume space. Once you have this matrix, you can compute the actual texture coordinates in a shader. I put my AIM name in my profile so we can discuss it more there.
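A tiny sketch of that mapping (Python; a pure Y rotation stands in for the view rotation, and the volume is assumed to span [-0.5, 0.5] before the shift into texture space — the real matrix would fold in translation and scale too):

```python
import math

def rot_y(angle):
    """3x3 rotation about the Y axis (stand-in for the view rotation)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_vec(m, v):
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def slice_texcoord(p_world, view_rot):
    """Map a point on a view-aligned slice quad into 3D texture space:
    undo the view rotation (the inverse of a pure rotation is its
    transpose), then shift the [-0.5, 0.5] cube into [0, 1]."""
    p_vol = mat_vec(transpose(view_rot), p_world)
    return tuple(c + 0.5 for c in p_vol)
```

A quad corner that was rotated with the view maps straight back onto the stationary volume, which is why only the texture coordinates appear to move.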

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Professor Science posted:

Real OGL3 != the OGL3 that was discussed until last year's SIGGRAPH. I assume they'll actually explain WTF happened at this year's SIGGRAPH (so a few weeks), but from what I've heard it's a far less radical change than what they announced (supposedly mobile people were unhappy that it would not look very much like OpenGL ES, and considering that's the only area where they've had a real lock on the market, back to the drawing board they went).
They've got a lock on the professional graphics market too. If the API was really that much different, I can see the concern, but most of it was taking out the trash that I can't imagine would harm mobile. I thought most of the concerns were over technical issues, like mandating S3TC, and numerous fine points about the spec.

OpenGL ES isn't really just a mobile thing either, most of the current generation of game consoles use graphics libraries derived from it.

quote:

How exactly does 3d texturing work.
The same way 2D texturing works, except instead of a 2D texture coordinate that retrieves a value from a 2D texture map, it's a 3D texture coordinate that retrieves a value from a 3D texture map. Where the polygon is in space has nothing to do with it; you supply texture coordinates for each vertex.

Entheogen
Aug 30, 2004

by Fragmaster

StickGuy posted:

The idea is that the quads are always perpendicular to the eye vector (essentially a billboard). The quads slice through the volume so each pixel on a quad gets a tri-linearly interpolated texture value. The tricky thing about this approach is that you must compute a matrix that maps points on the quads into points in the volume space. Once you have this matrix, you can compute the actual texture coordinates in a shader. I put my AIM name in my profile so we can discuss it more there.

Dear StickGuy, I finally got this working in my own project. I didn't at first, because I was stupidly moving my quads, but then I read through your code again and realized you were not rotating the quads, just the texture coordinates. I think from here I can figure out how to build the rotation matrix: just send the angles and translation offset to the shader, let it construct the inverse rotation/translation matrix, and then multiply that with the tex coordinates.

What is your email? Send it to me over AIM and I will include it in my source code, to credit you for your help. I did not copy your code, since I use Java, but it definitely helped me out a lot. Thank you so much for your help.

Entheogen
Aug 30, 2004

by Fragmaster
I have successfully implemented the 3D slicing technique; however, it appears to be rather slow. The way I calculate the inverse matrix doesn't appear to be much of a factor here, as I only do it once and then send it to the vertex shader, which multiplies it with the tex coordinates. I think the limit here is my card's texture fill rate and the blending it has to do between all the slices. Also, there appear to be some artifacts due to the slicing. They appear as lines that criss-cross the volume.

I was wondering: do you think combining this with the technique I was using earlier could help both quality and speed? What I am thinking is to generate space-filling cubes again, but this time, instead of giving them colors, give them 3D texture coordinates. I am not sure how much faster it would be than what I am doing now, but it could improve the visual quality. I will try this and report back.

Here is the screen shot:


There are 1000 slices here, and I am using my gaussian filter to isolate a certain data range as well as provide false coloring. You can see the artifact lines criss-crossing the volume, though.

OK, it seems these artifact lines only appear with the gaussian shader, not the linear one. Here is the source for my linear fragment shader:

code:
uniform float scale_factor;
uniform sampler3D tex3D;

void main(void)
{
   // sample the scalar field from the 3D texture
   float v = texture3D(tex3D, gl_TexCoord[0].stp).r;
   // alpha proportional to the data value
   gl_FragColor = vec4(scale_factor, 1.0, 1.0, v * scale_factor);
}
and here is gaussian one:

code:

uniform sampler3D tex3D;

uniform float scale_factor; 
uniform float gauss_a; 
uniform float gauss_b; 
uniform float gauss_c; 

void main(void)
{
   // sample the scalar field from the 3D texture
   float v = texture3D(tex3D, gl_TexCoord[0].stp).r;
   // gaussian transfer function: a * exp(-(v - b)^2 / (2 * c^2))
   float exponent = v - gauss_b;
   exponent *= exponent;
   exponent /= -2.0 * (gauss_c * gauss_c);
   float nv = scale_factor * gauss_a * exp(exponent);
   gl_FragColor = vec4(1.0 - v, v, 1.0, nv);
}
How could I still use a normal-distribution transfer function but avoid these artifact lines?

Entheogen fucked around with this message at 08:07 on Jul 30, 2008

StickGuy
Dec 9, 2000

We are on an expedicion. Find the moon is our mission.
The slicing technique is definitely fill-rate limited. There are some optimizations you can do such as clipping the quads to the data volume, but it's still limited by the number of slices through the data volume itself that you draw. You can experiment with the number of slices that you need to get a reasonable appearance. A reasonable rule of thumb is to have at least one slice pass through each voxel. You can also probably combine it with your previous technique to do some sort of subdivision to produce more partial slices in parts of the volume where things are visible and few or no slices where nothing is visible.

As far as your lines, they look a bit strange, but it's hard to tell what should and shouldn't be there. Can you post a screen shot of the linear shader where the lines are absent for comparison? Also, what are the dimensions of the volume you're visualizing? Are you using a floating point texture?
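As a back-of-the-envelope check on the one-slice-per-voxel rule of thumb: with view-aligned slices spaced at most one voxel apart, the worst case is looking down the cube's space diagonal, so an N^3 volume wants roughly N*sqrt(3) slices:

```python
import math

def slices_needed(dim):
    """One-slice-per-voxel rule of thumb: view-aligned slices spaced at
    most one voxel apart must cover the worst-case depth extent of an
    N^3 volume, which is its space diagonal, N * sqrt(3)."""
    return math.ceil(dim * math.sqrt(3))
```

So a 128^3 volume wants around 222 slices in the worst orientation; drawing only 128 undersamples diagonal views, which can contribute to slice-pattern artifacts.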

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
You could compensate for fillrate by having the shader sample multiple points within the volume, which would obviously involve some refactoring.

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
In my Quake 3 BSP loader I'm trying to get frustum culling to work, but it culls far too aggressively. For example, take this window:

http://www.craigsniffen.com/bsp/asintended.png

When I alter my view a little bit, the U shaped frame is culled.

http://www.craigsniffen.com/bsp/malculling.png

At first I thought I was computing the bounding boxes wrong, so I decided to draw them to the screen. Surprisingly, the boxes appear to be in the right place (the box that I believe houses the U frame is highlighted in red). If that bounding box weren't on the screen, I shouldn't be able to see it when I draw it, correct? Here's the relevant code in case there's a dumb error:

code:
public void RenderLevel(Vector3 cameraPosition, Matrix viewMatrix, Matrix projMatrix, GameTime gameTime, GraphicsDevice graphics)
{
   BoundingFrustum frustum = new BoundingFrustum(viewMatrix * projMatrix);

   BasicEffect beffect = new BasicEffect(graphics, null);
   beffect.VertexColorEnabled = true;
   beffect.World = Matrix.Identity;
   beffect.View = viewMatrix;
   beffect.Projection = projMatrix;

   foreach (Q3BSP leaf in leafs)
   {
      if (!frustum.Intersects(leaf.Bounds))
      {
         // leaf was culled: draw its box so false culls are visible
         RenderBoundingBox(leaf.Bounds, beffect, graphics);
         continue;
      }

      // add leaf to render list
   }
}

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

In my Quake 3 BSP loader I'm trying to get frustum culling to work but the results are far too overzealous.
As usual, try breaking the problem down. Try culling against just ONE plane perpendicular to the camera; that makes it pretty easy to tell whether your box-plane cull is working properly, since all of the culling will happen on one particular half of your screen. Another very useful tool in any sort of visibility optimization is being able to lock the current visible set, so you can walk around and see exactly what it's doing.

(You are using plane-side culling, right?)
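The usual box-vs-plane test is the "p-vertex" trick; a sketch (Python, with planes stored as (a, b, c, d) and a*x + b*y + c*z + d >= 0 meaning the inside half-space — conventions vary, which is a common source of exactly this kind of over-culling):

```python
def box_outside_plane(bmin, bmax, plane):
    """Plane-side cull of an AABB: pick the box corner furthest along
    the plane normal (the 'p-vertex'); if even that corner is behind
    the plane, the whole box is outside."""
    a, b, c, d = plane
    px = bmax[0] if a >= 0 else bmin[0]
    py = bmax[1] if b >= 0 else bmin[1]
    pz = bmax[2] if c >= 0 else bmin[2]
    return a * px + b * py + c * pz + d < 0

def box_outside_frustum(bmin, bmax, planes):
    # cull only when the box is fully behind at least one plane
    return any(box_outside_plane(bmin, bmax, p) for p in planes)
```

If a plane normal or the d sign is flipped, boxes straddling that plane get rejected even though they're visible, which looks exactly like the screenshot above.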

OneEightHundred fucked around with this message at 02:47 on Jul 31, 2008

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Entheogen posted:

I have successfully implemented the 3d slicing technique, however it appears to be rather slow. The way I calculate the inverse matrix also doesn't appear to be much of a factor here, as I only do it once, and then send it to vertex shader which actually multiplies it to tex coordinates. I think the limit here is imposed by my cards texture fill rate and the blending that it has to do between all slices. Also there appear to be some artifacts due to slicing. They appear as these lines that criss cross the volume.

I was wondering, however, do you think if I combine this with a technique I was doing earlier could help both the quality and the speed? What I am thinking about, is to generate space filling cubes again, but this time instead of giving them colors I could give them 3d texture coordinates. I am not sure how much faster it could be than what I am doing now, but it could possibly increase the visual quality. I will try this and report back.

How could I still use normal distribution function but avoid these artifact lines?

That artifact pattern seems weird; I expect it has something to do with the depth resolution of your texture and the number of slices you draw. Still, if you have trilinear filtering working properly, I don't quite see why you should be seeing those artifacts (although I suppose it really depends on what your data looks like). Either way, it's definitely a sampling error of some sort.

I'd agree that you should "compress" multiple slices into one by sampling your depth texture multiple times per shape. Instead of just sampling "TexCoord[0].stp", grab "TexCoord[0].stp + vec3(0.0, 0.0, stepSize * n)" as well (for n steps). Obviously, you'd want that offset transformed into the proper space for your 3D texture as well.
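A fragment-shader sketch of that idea (hedged: `stepSize` and `NUM_STEPS` are made-up tuning parameters, and the step is taken straight along the texture-space s axis for simplicity rather than properly transformed):

```glsl
// Sketch: fold NUM_STEPS samples into each slice fragment,
// compositing front-to-back so fewer slice quads are needed.
uniform sampler3D tex3D;
uniform float stepSize;     // assumed: sample spacing in texture space
const int NUM_STEPS = 4;    // assumed: samples compressed per slice

void main(void)
{
    vec4 accum = vec4(0.0);
    for (int n = 0; n < NUM_STEPS; n++)
    {
        vec3 tc = gl_TexCoord[0].stp + vec3(0.0, 0.0, stepSize * float(n));
        float v = texture3D(tex3D, tc).r;
        vec4 col = vec4(1.0 - v, v, 1.0, v);  // same ramp as the earlier shaders
        // Front-to-back "under" operator.
        accum.rgb += (1.0 - accum.a) * col.a * col.rgb;
        accum.a   += (1.0 - accum.a) * col.a;
    }
    gl_FragColor = accum;
}
```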

Entheogen
Aug 30, 2004

by Fragmaster

StickGuy posted:

The slicing technique is definitely fill-rate limited. There are some optimizations you can do such as clipping the quads to the data volume, but it's still limited by the number of slices through the data volume itself that you draw. You can experiment with the number of slices that you need to get a reasonable appearance. A reasonable rule of thumb is to have at least one slice pass through each voxel. You can also probably combine it with your previous technique to do some sort of subdivision to produce more partial slices in parts of the volume where things are visible and few or no slices where nothing is visible.

As far as your lines, they look a bit strange, but it's hard to tell what should and shouldn't be there. Can you post a screen shot of the linear shader where the lines are absent for comparison? Also, what are the dimensions of the volume you're visualizing? Are you using a floating point texture?

The volume is a 128^3 data set of 4-byte floating point numbers. After reading it from the file I normalize it to the range [0,1]. I think I know why the lines are there: perhaps the inside of some subvolume is a lot more transparent than its contours. The normal distribution function makes the inside look very empty, while the contours are turned into glowy lines because they fall within the standard deviation.

Here is a picture of the linear shader at work:


Here is the gaussian one at work from the same perspective:


In each case, 128 slices are drawn.

I will play around more with the fragment shader and some other techniques. Perhaps I could get away with fewer slices by shading each pixel according to the texels that lie between it and the next slice?

MasterSlowPoke
Oct 9, 2005

Our courage will pull us through
I'm trying to figure out the best process for rendering a BSP with DirectX. The traditional way to render a BSP is to find which polygons (leafs) are visible from the camera and render them as you find them. This works fine and dandy with OpenGL, but the relatively high cost of Draw calls in DirectX makes me wary.

I'm thinking that I should instead store each polygon in a buffer (delineated by the texture used) as visible leafs are found, then render each buffer at the end. Still, this seems like a lot of work for the CPU to do each frame.

I'm also thinking of just making static buffers for each texture and rendering them all each frame, giving the hardware the task of culling nonvisible leafs in the vertex shader. I'm having trouble determining what to do with the lightmaps in that process. I suppose I could stitch all of the lightmaps into one large map, but then I'm limiting the number of lightmaps I could have (probably about 16 128x128 maps).

Any thoughts on how I should go about this? Also, does anyone find that GameDev.net's forums are absolutely worthless for getting help?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

I'm trying to figure out the best process for rendering a BSP with DirectX. The traditional way to render a BSP is to find which polygons (leafs) are visible from the camera and render them as you find them. This works fine and dandy with OpenGL, but the relatively high cost of Draw calls in DirectX makes me wary.
Create a bitfield of visible surfaces. For every leaf in a visible cluster, mark the surfaces in that leaf as visible (surfaces can appear in multiple clusters!)

Iterate over the surface list to batch them. q3map already sorts geometry by texture and lightmap index so you can scan right through them and flush batches as soon as a change is detected.

If you're going to do everything with hardware shaders in your material system, create a static buffer, upload all of the drawvert data to it, and just copy index ranges into an index buffer, flushing the draw when it fills up or a material/lightmap change is detected.
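The visibility-marking step can be sketched in plain C (a sketch only: the names and sizes here, like `MAX_SURFACES` and `mark_visible`, are invented for illustration and aren't from q3map):

```c
#include <stdint.h>
#include <string.h>

#define MAX_SURFACES 4096  /* assumed cap for the illustration */

/* One bit per surface. Surfaces can appear in multiple leafs,
   so marking must be idempotent, which bit-setting is for free. */
typedef struct {
    uint32_t bits[MAX_SURFACES / 32];
} SurfaceMask;

static void mask_clear(SurfaceMask *m) {
    memset(m->bits, 0, sizeof m->bits);
}

static void mask_set(SurfaceMask *m, int surf) {
    m->bits[surf >> 5] |= 1u << (surf & 31);
}

static int mask_test(const SurfaceMask *m, int surf) {
    return (int)((m->bits[surf >> 5] >> (surf & 31)) & 1u);
}

/* For every leaf in a visible cluster, mark that leaf's surfaces.
   leaf_surfaces is the leaf's surface-index list. */
static void mark_visible(SurfaceMask *m,
                         const int *leaf_surfaces, int count) {
    for (int i = 0; i < count; ++i)
        mask_set(m, leaf_surfaces[i]);
}
```

Because it's one bit per surface, a surface shared by several visible leafs is marked exactly once, which is what lets the later single pass over the texture/lightmap-sorted surface list emit each batch once.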

For transparent stuff it's a bit more difficult; there are a lot of ways to handle it, and many engines these days are lazy and don't even bother sorting it because it's not the hot poo poo it was when Quake 3 came out.

quote:

Also, does anyone find that GameDev.net's forums are absolutely worthless for getting help?
GameDev.net's forums are mostly aspiring programmers with no experience, so yes.

OneEightHundred fucked around with this message at 21:32 on Aug 3, 2008


shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

GameDev.net's forums are mostly aspiring programmers with no experience, so yes.


So what are some good fora for getting OGL help?



My question is this:

I am implementing shadow mapping in some GLSL stuff I'm working on. I figured I'd use a GL_DEPTH_COMPONENT texture attached to a Frame Buffer Object and render the scene from my light's POV into that FBO.

What I want to do now is visualize the light's depth buffer (render it to a quad), and I am a bit clueless as to how I can read a GL_DEPTH_COMPONENT texture from inside a shader (using a texture sampler).

Any ideas?
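One common approach (a sketch, not tested against your setup): bind the depth texture as an ordinary `sampler2D` and write its red channel out as grayscale. With the texture's `GL_TEXTURE_COMPARE_MODE` set to `GL_NONE`, sampling returns the raw depth value rather than a shadow-comparison result, so `.r` is all you need:

```glsl
// Sketch: visualize a depth texture on a fullscreen quad.
// depthTex is the GL_DEPTH_COMPONENT texture from the light's FBO,
// with GL_TEXTURE_COMPARE_MODE set to GL_NONE.
uniform sampler2D depthTex;

void main(void)
{
    float d = texture2D(depthTex, gl_TexCoord[0].st).r;
    // Depth is non-linear, so most of the scene lands near 1.0;
    // the pow() is an arbitrary contrast boost to make detail visible.
    gl_FragColor = vec4(vec3(pow(d, 16.0)), 1.0);
}
```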
