|
I'm not a display list whiz, but your two versions don't really do the same thing. I'm not sure if display lists really have a meaning outside of glBegin. By which I mean stuff like glVertex doesn't work outside of a begin/end pair. I'd use VBOs though, as they tend to be more efficient on modern hardware. (as they are the path the driver developers care about)
|
# ¿ Sep 6, 2008 01:25 |
|
Right, but how are you creating the lists? glVertex doesn't really mean anything outside of a begin/end pair - so the driver may be getting confused when you compile your display lists, or try to draw them. First thing that comes to mind for me, anyway.
|
# ¿ Sep 6, 2008 03:01 |
|
I think that's just worded badly. glVertex/glColor/glNormal, etc. don't really mean anything by themselves - the display list will be optimized by the implementation, but in order to do that it needs to know what the data means. By not including begin/end, you aren't including that info, so it can't really do anything. Most implementations let you turn off multithreading; try turning it off and see what happens.
|
# ¿ Sep 6, 2008 06:06 |
|
On windows, it's usually in the driver options. I know nvidia has it; not sure about ATI. On Mac, it's set programmatically (though that may be difficult if you're using perl)
|
# ¿ Sep 6, 2008 09:44 |
|
Everything will be decomposed to triangles eventually, so you should start with them. Constantly generating new display lists won't speed you up unless you re-use them a lot. You're almost certainly better off using a vertex buffer object and updating it.
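To make that concrete, here's a rough sketch of reusing a single buffer for geometry that changes every frame - buffer and array names are made up and there's no error checking, so treat it as the idea, not a drop-in implementation:

```c
/* Sketch: one reusable buffer object for per-frame geometry.
   'vbo' and 'verts' are hypothetical; no error checking. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* DYNAMIC_DRAW hints that we'll rewrite the contents often */
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_DYNAMIC_DRAW);

/* later, each frame: orphan the old storage, then upload the new data */
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), NULL, GL_DYNAMIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(verts), verts);
```

The orphaning trick (passing NULL first) lets the driver hand you fresh storage instead of stalling on a buffer the GPU is still reading.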
|
# ¿ Sep 24, 2008 01:52 |
|
shodanjr_gr posted:Tried that, same thing (maybe sliiiiiiiiiiiiightly faster). How are you implementing the shadow mapping? What's your state look like? Are you copying the depth texture to the CPU memory and back to the GPU? Are you letting OpenGL generate the texture coordinates for you with regards to the shadow mapping? Pbuffers suck horribly and the intel chips are crap with FBOs (and in general). What happens if you just draw your scene twice, does performance still suck? Have you tried a profiler?
|
# ¿ Dec 20, 2008 00:27 |
|
Scarboy posted:I have a camera in OpenGL using the gluLookAt function that is working correctly. The camera rotates around a fixed point at the center of the screen. Is there any way i can lower the center point of the camera on the screen/viewport/window? You can just translate up and down to make it seem like the camera is higher or lower. I think you're misinterpreting how the math works out. In the end, after your transformations are applied, the camera is at 0,0,0 looking down the -z axis and everything else has been transformed relative to that. Try thinking about it as if the world is moving around the camera, instead of the camera moving through the world. EDIT: Alternately, you could adjust your viewport - but that might be weird. Spite fucked around with this message at 02:55 on Feb 25, 2009 |
# ¿ Feb 25, 2009 02:48 |
|
PnP Bios posted:I imagine most of the changes in 3.0 have to do with GLSL rather than the core API. ex, implementation of geometry shaders. Well, it's very incremental, but 3.0 _should_ have been like ES2.0 and removed all that crap. Especially since half of OpenGL isn't ever used, should never be used, and your computer's implementation doesn't support it anyway. As for 3.0 usage, there really isn't a reason to use it yet since almost everything interesting can be done with an extension and no one wants to learn a new API until they have to. But it will never happen because, as you said, the legacy developers would go crazy. There are a lot (and I mean a lot) of really baaaaad OpenGL apps out there. Though I have to say: it is a royal pain in the rear end to get something up and running quickly in ES2.0.
|
# ¿ Jun 9, 2009 07:33 |
|
shodanjr_gr posted:Since we are talking about point sprites, is it possible to get to the point-sprite generated geometry inside a geometry shader? If you are using geometry shaders (and really, they kind of suck since they aren't very performant), why not extrude a single vertex into a quad yourself?
|
# ¿ Jun 18, 2009 07:44 |
|
Dijkstracula posted:So, I'm continuing on my search for a proper non-fixed-pipeline introduction to OpenGL 3.x, and so far I'm still inexplicably coming up short, with the exception of the OpenGL ES 2.0 programming guide. Is ES close enough to vanilla OpenGL that I can more or less s/E(GL\w+)/GL\1/ and swap a few header files in and out? ES2 is nothing like vanilla OpenGL, but will be close to OpenGL 3.2. OpenGL 3.0 doesn't exist. IT DOES NOT EXIST I TELL YOU. ahem. In ES2 you have to do almost everything by hand, to the point where you have to pass your matrices to your shaders yourself. There is no fixed function at all. This is a pain when you are starting out, since you have to write a ton of code just to get a quad on screen, but it is waaaay better in the long run since it removes stuff that really shouldn't still be in the API. Everyone says to stay away from Nehe because they are out of date and naive about how they do things. For example, in a real application you should never, ever, ever use Display Lists or call glBegin. All geometry should be put into a VBO and you should draw with that. It's ok for a very simple tutorial, but it really encourages bad habits, I feel. As for my previous comment on Geometry shaders, I mean that every implementation that's currently available sucks - especially under OpenGL. They're a great idea that has been very underwhelming to me thus far.
|
# ¿ Jul 2, 2009 01:19 |
|
Jo posted:How is the OpenGL experience in Java? Is it fairly similar to C++, or does the extra layer of indirection with memory management gently caress everything over? I think it's absolutely horrible, even if you find a binding that doesn't create some over-engineered OO paradigm.
|
# ¿ Jul 7, 2009 21:32 |
|
Strumpy posted:You are wrong. LWJGL is a great binding. Uses native buffers to imitate float pointers and the like and the API is for the most part a direct binding. There is some Display classes and stuff to get it started that are not GL specific but all the openGL code will be. Yes, but you are still using Java, which means you are adding a crapload of overhead to a type of programming that should be as efficient as possible. Crossing the boundary from the JVM into the GL library is a pretty expensive thing to do for every GL call.
|
# ¿ Jul 9, 2009 20:06 |
|
Unparagoned posted:I'm using opengl es for the iphone. I have a very simple model that exists in 3d space, the problem is that the visible parts depend on the order they are drawn rather than their position in 3d space. So say the person has a shield in their left hand, then from the left view, you see the shield and can't see the body since it's being blocked. Now from the right side, the body should still be seen, but it's not. You really need to sort your transparent objects from back to front. Even if you draw them last, you need to draw them in order otherwise they won't actually be drawn correctly since the blending depends on what is already in the destination buffer (the framebuffer). Check out the OpenGL Red and Blue books for details. Also keep in mind that blending is slow, especially on the iPhone.
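The sort itself is cheap. A minimal sketch of sorting object centers back to front by squared distance from the eye - all the names here are made up for illustration:

```c
#include <stdlib.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 g_eye; /* set before sorting */

static float dist2(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz; /* no sqrt needed just to order them */
}

/* qsort comparator: farthest first, so blending composites correctly */
static int back_to_front(const void *pa, const void *pb) {
    float da = dist2(*(const Vec3 *)pa, g_eye);
    float db = dist2(*(const Vec3 *)pb, g_eye);
    return (da < db) - (da > db);
}

void sort_transparent(Vec3 *centers, size_t n, Vec3 eye) {
    g_eye = eye;
    qsort(centers, n, sizeof(Vec3), back_to_front);
}
```

Sorting by object center isn't exact for intersecting geometry, but it's usually good enough, especially on the phone.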
|
# ¿ Jul 21, 2009 22:27 |
|
krysmopompas posted:2 pass method That's a good technique, but you may not have the fill rate to burn on the phone. You can do a very crude sort and get a decent image (especially if you have lots of inter-frame coherency, as was mentioned), and it doesn't sound like you have too many objects in the scene - so that should be cheap.
|
# ¿ Jul 23, 2009 04:33 |
|
Luminous posted:Matrix stuff I'm not quite sure I understand what you are asking for, but if you can get a matrix you should be able to extract the values from it. The first column is the X axis, the second the Y, the third the Z. The last is the position in that space. Is that what you are doing? Check out http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/lookat.html
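In code it's just indexing, given GL's column-major layout - a sketch, assuming you fetched the matrix with something like glGetFloatv(GL_MODELVIEW_MATRIX, m):

```c
typedef struct { float x, y, z; } Vec3;

/* Column-major 4x4: column 0 = X axis, 1 = Y axis, 2 = Z axis,
   column 3 = position. */
static Vec3 mat_column(const float m[16], int col) {
    Vec3 v = { m[col*4 + 0], m[col*4 + 1], m[col*4 + 2] };
    return v;
}
```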
|
# ¿ Aug 12, 2009 00:16 |
|
The1ManMoshPit posted:Does anybody know of an alternative to glReadPixels on the iPhone (OpenGL ES 1.5)? I'm profiling a section of my code that needs to read some data that I've rendered into an FBO and a quarter of my time is spent just copying data out with glReadPixels. This seems especially ridiculous since the iPhone's video memory is actually shared main memory iirc, so it seems like I should just be able to get a pointer to it somehow which would obviously speed my code up immensely. As a rule, you should never, ever use it. Ever. Really. What are you doing that requires the readback? Is there any other way you can do it?
|
# ¿ Sep 9, 2009 23:14 |
|
Femtosecond posted:I have a question that is sort of more a vector math question. I haven't had to deal with vector math for a few years and my old vector math text is sitting in a box at my parent's house so I'm not sure what to do. If you assume the rectangle is at 0,0,0, you can generate the 4 vertices for your rectangle by adding/subtracting. You can then use the orientation vector you have as part of the basis for a rotation matrix. (ie, if the rectangle is 'facing' down +Z, then you use 1,0,0 as the X and 0,1,0 as the Y, while the orientation vector is Z) Is that what you mean?
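Here's a sketch of the corner generation part - it assumes you already have unit 'right' and 'up' vectors spanning the rectangle's plane (the orientation vector would be their cross product); all names are made up:

```c
typedef struct { float x, y, z; } Vec3;

/* a + b*s */
static Vec3 madd(Vec3 a, Vec3 b, float s) {
    Vec3 r = { a.x + b.x*s, a.y + b.y*s, a.z + b.z*s };
    return r;
}

/* Four corners of a w-by-h rectangle centered at 'center', lying in the
   plane spanned by unit vectors 'right' and 'up'. Counter-clockwise. */
void rect_corners(Vec3 center, Vec3 right, Vec3 up,
                  float w, float h, Vec3 out[4]) {
    Vec3 c;
    c = madd(center, right, -w*0.5f); out[0] = madd(c, up, -h*0.5f);
    c = madd(center, right,  w*0.5f); out[1] = madd(c, up, -h*0.5f);
    c = madd(center, right,  w*0.5f); out[2] = madd(c, up,  h*0.5f);
    c = madd(center, right, -w*0.5f); out[3] = madd(c, up,  h*0.5f);
}
```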
|
# ¿ Sep 11, 2009 08:14 |
|
haveblue posted:-Transform the vertex normal by the normal matrix (which is the upper left 3x3 submatrix of the modelview matrix, neglecting nonuniform scaling), normalize the result, to get the eyespace normal. It's the inverse transpose of the upper 3x3 of the modelview. Of course, for a pure rotation (or rotation plus uniform scale, once you normalize the result), that gives the same direction as using the matrix itself. Your problem is probably applying the translation to the light vector, instead of just the rotation/scale.
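If you want to build the normal matrix yourself rather than rely on gl_NormalMatrix, a sketch via the cofactor form (row-major 3x3 here for clarity; inverse(M)^T = cofactor(M) / det(M)):

```c
/* Normal matrix = inverse transpose of the upper 3x3 of the modelview. */
void normal_matrix(const float m[9], float out[9]) {
    float c[9];
    c[0] = m[4]*m[8] - m[5]*m[7];
    c[1] = m[5]*m[6] - m[3]*m[8];
    c[2] = m[3]*m[7] - m[4]*m[6];
    c[3] = m[2]*m[7] - m[1]*m[8];
    c[4] = m[0]*m[8] - m[2]*m[6];
    c[5] = m[1]*m[6] - m[0]*m[7];
    c[6] = m[1]*m[5] - m[2]*m[4];
    c[7] = m[2]*m[3] - m[0]*m[5];
    c[8] = m[0]*m[4] - m[1]*m[3];
    float det = m[0]*c[0] + m[1]*c[1] + m[2]*c[2];
    for (int i = 0; i < 9; i++) out[i] = c[i] / det;
}
```

For a uniform scale of 2, this gives 0.5 on the diagonal - the direction is unchanged, which is why the scale washes out once you normalize.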
|
# ¿ Sep 24, 2009 01:33 |
|
Are your T,B,N vectors the handedness you are expecting? Maybe your bitangent is pointing the opposite direction or something.
|
# ¿ Sep 30, 2009 00:08 |
|
Bonus posted:So I made a simple 3ds loader (mostly by following this tutorial) for my cool car racing game. I load a 3ds model fine and then I load a .bmp texture file that I turn on before drawing the vertices of the model. But most 3ds (or obj) models that I've found don't come with .bmp textures. Are the textures encoded into the 3ds file itself? What's the usual (and simplest) way for loading a model with a nice texture in OpenGL? It doesn't even have to be in 3ds format, obj is fine too. There really isn't. You'll need to roll your own, or find a library. As for the textures, I'd assume they'd be part of a material list of some sort but I'm not familiar with that file format. (your link doesn't really go into it). It may only contain references to actual texture files. As for loading textures that aren't raw BMP, it's very dependent on your OS. OSX for example has ImageIO that can get the raw bits out.
|
# ¿ Apr 27, 2010 02:55 |
|
This has a crapload of info on the PSX: http://gshi.org/eh/documents/psx_documentation_project.pdf The thing doesn't do perspective correct texturing, so you don't have to worry about that. How familiar are you with rasterization in general? If not at all, start with Bresenham's line algorithm. Abrash has a ton of interesting stuff if you can find it. Foley and van Dam has a chapter on Raster algorithms too, but that book is way old.
|
# ¿ Apr 28, 2010 00:09 |
|
heeen posted:Can you cite anything for those claims? I'd love to read about it more in depth. It depends on the hardware and driver. Typically, changing the active shader is the most expensive (well, unless you are uploading a large constant buffer or something). The costs are way more apparent on the CPU side than the hardware side in most cases though (because of the validation, etc the runtime has to do. OpenGL is worse than DX in this regard because of all its fixed-function legacy stuff). Many games these days are still CPU bound (especially on the consoles) because stuff isn't batched well or merged well.
|
# ¿ Apr 28, 2010 19:50 |
|
Bonus posted:I optimized my terrain drawing by using display lists (since it doesn't change anyway). Now it runs fine. Is this acceptable or should I still look into vertex arrays and VBOs? Which OS? If OSX, use libgmalloc and gdb to find your error. You're probably passing a pointer to something that's too big or too small. I see a ton of errors in apps that give a bad pointer to stuff like glVertexPointer. And for the love of God, don't use display lists EVER. As said, use VBOs with STATIC_DRAW for your terrain. Chunk it up so you can cull out parts - the fastest triangle is the one you don't have to draw. You can still use one VBO and DrawRangeElements per piece. Note that VBOs don't necessarily HAVE to be in VRAM - the driver makes that decision and will page stuff on and off based on load and pressure. STATIC_DRAW hinted buffers will very likely stay resident on the card though. Geometry tends to use much less space than texture data, as well. EDIT: and now I realize most of this information is redundant with what's already been posted. Spite fucked around with this message at 09:23 on May 7, 2010 |
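A rough sketch of the static-terrain setup I mean - all names, sizes and chunk bounds are hypothetical and there's no error checking:

```c
/* One STATIC_DRAW vertex buffer plus an index buffer,
   drawn chunk by chunk with glDrawRangeElements. */
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glVertexPointer(3, GL_FLOAT, 0, NULL);
glEnableClientState(GL_VERTEX_ARRAY);

/* per visible chunk: 'count' indices starting at 'first', referencing
   vertices in [minVert, maxVert] - the range hint helps the driver */
glDrawRangeElements(GL_TRIANGLES, minVert, maxVert, count,
                    GL_UNSIGNED_SHORT, (void *)(first * sizeof(GLushort)));
```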
# ¿ May 7, 2010 09:18 |
|
heeen posted:Display List do have great performance, especially on nvidia hardware. The compiler does a very good job at optimizing them. But as soon as you're dealing with shaders things will start to get ugly because there are problems with storing uniforms etc. Display lists: that really depends on the platform, and the driver just makes them into VBOs anyway. The original idea of display lists is basically what DX11's deferred context paradigm is trying to get at, and even that has its problems. Everyone in the OGL world is trying to kill display lists, so it's really a bad idea to use them. (the problem being that you can't REALLY optimize them since they tell you nothing about the state at the time the list is created/used. Most CPU overhead is spent validating state and in associated costs - so long as you aren't sending lots of data to the GPU and converting between formats.) And if you still have VertexPointer, etc., I'd still use them. Optimizations can be made in the driver (ie, in clip space) if the driver knows which attribute is the position and what the modelview and projection matrices are. This won't be true forever, definitely, but you only have a limited number of vtx attribs and they are definitely reserved if you aren't running OGL 3.0.
|
# ¿ May 8, 2010 07:47 |
|
Do not use Display Lists. They are deprecated and disgusting. Most drivers will convert them to VBO/VAO under the hood anyway. They DO NOT help performance in the way most people think - because of their design the driver can't cache state and validation work, which is what takes all the time anyway. Use VBO and put everything you can into VRAM. Keep in mind that stuff may be paged on and off the card as the driver needs. Use as few draw calls as necessary - if your hardware supports instancing, use that. As for UBO, that spec is a mess. It's probably not that much faster than making a bunch of uniform arrays and updating those - although you can't update pieces of it that way. You can also try gpu_program4 and just update the constant arrays.
|
# ¿ May 14, 2010 00:02 |
|
haveblue posted:You can't read directly from the depth buffer, you have to bind the previously rendered depth buffer as a texture and render the shader output into a different target. Yeah - do a Z-prepass with color writes turned off. You might also be able to do something by mucking with the depth test and blending, but using a shader will be more straightforward.
|
# ¿ May 14, 2010 12:17 |
|
Don't use copytex. Attach a depth texture to an FBO, render a z-prepass into it. Then bind that texture, and draw into a different FBO with the shader that reads the depth value. You can also turn off depth writes, turn on color writes and use the same FBO. Also, don't use GL_LUMINANCE - use ARB_depth_texture. GL_DEPTH_COMPONENT24 and GL_DEPTH_COMPONENT are the <internalformat> and <format>, respectively.
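Something like this, roughly - the size is made up and there's no error or completeness checking, but it's the shape of the depth-only FBO setup:

```c
/* Sketch: depth texture attached to an FBO for a z-prepass. */
GLuint depthTex, fbo;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);  /* depth-only pass: no color attachment */
glReadBuffer(GL_NONE);
/* ...render the z-prepass, then bind depthTex for sampling... */
```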
|
# ¿ May 15, 2010 07:44 |
|
No, you can't draw to both the backbuffer and an FBO, nor should you want to. You can render to multiple color attachments via FragData. It's much better to get into the habit of rendering to an FBO and then blitting that to the screen. The iPhone, for example, requires you to render into a renderbuffer and then give that to the windowing system to present. You can just draw a fullscreen quad with an Identity projection matrix - that also allows you to do most processing effects easily.
|
# ¿ May 15, 2010 23:55 |
|
haveblue posted:To be pedantic, I think that's a property of OpenGL ES, not the iPhone specifically. Well, if you mean that there's no backbuffer and all rendering must be done into an FBO, then sure. However, it would be nice if you could present a texture instead of a renderbuffer, etc. ultra-inquisitor: Pass your modified 'pos' as a varying and set the output red channel to w and see what it's being set to. I agree with OneEightHundred though, it sounds like the shader isn't bound.
|
# ¿ May 16, 2010 07:55 |
|
I'm confused as to what you are asking, but say you combine the rotation and translation matrices of the first bone into M1. M1 = T1*R1. Then the end of your tentacle is at V1 = M1 * V0. You can rotate the next bone around its center via M2 = M1 * R2. Or you can rotate the transformed point via M2 = R2 * M1. Which do you want? Or you can just make the matrix yourself. Think of the first 3 columns as the axes of a coordinate space. (the first is the x-axis, the second the y-axis, etc) That lets you define which direction the bone is facing. The last column is position. Spite fucked around with this message at 06:10 on May 17, 2010 |
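To see the difference between the two orders, here's a plain column-major 4x4 multiply (the matrices in the test are a translate by +2 in X and an exact 90-degree rotation about Z, just for illustration):

```c
/* out = a * b, OpenGL column-major layout */
void mat4_mul(const float a[16], const float b[16], float out[16]) {
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[k*4 + row] * b[col*4 + k];
            out[col*4 + row] = s;
        }
}
```

With T*R the translation stays put (rotate in the bone's local space, then place it); with R*T the translation itself gets rotated.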
# ¿ May 17, 2010 06:07 |
|
YeOldeButchere posted:A little while ago I asked about the performance of branching in shaders, thanks for the answers by the way, but I'd like to know where to find that sort of info myself. I'm guessing this is GPU dependent and outside the scope of specific APIs which would explain why I don't recall anything about this mentioned in the D3D10 documentation. A quick google search doesn't seem to return anything useful either. I've found some info on NVIDIA's website but it's mostly stuff about how some GPU support dynamic branching and very little about performance. Ideally I'd like something that goes in some amount of detail about modern GPU architectures so I can really know why and when it's not a good idea to use branching in shaders, preferably with actual performance data shown. That's all really proprietary stuff, so I doubt you'd get the actual numbers. On modern cards, I think it's around the order of 2000-4000 pixels for branching and discard. So if you can reasonably assume a block of that size will all take the same path, branching is cheap. Divergent branches are really expensive, so if you have a very discontinuous scene (in terms of code path), your perf will tank. Of course, testing is the best way to determine this, as you say.
|
# ¿ May 17, 2010 21:10 |
|
UraniumAnchor posted:edit: Never mind, now I'm wondering why the hell noise() is always returning 0.0. Apparently it's designed that way. What the christ good is that? As a note, no GPU implements noise() in hardware. So don't ever use it.
|
# ¿ May 19, 2010 00:09 |
|
heeen posted:
You're using CG? It tries to emulate directx by offering "profiles" - unfortunately they don't expose all the card's features (for example, it requires gpu_program4 instead of gpu_shader4). What are you trying to use texture_gather for? Shadow Maps?
|
# ¿ May 24, 2010 19:14 |
|
I'm not sure what your skill level is, but all textures need Texture Coordinates, which are values that determine the texture mapping. A mapping from 0 - 1.0 will give the entire texture in a specific dimension, whereas 0 - 0.5 will give half of it, etc. Usually this means you have two coordinates for the x and y dimensions of a quad. They are usually called s and t. Do you know how VBOs work? You'll need to set up your data in VBOs (please don't use Begin/End). You'll also need to Gen and upload the data to the GPU via TexImage2D. You can use glTexParameteri to set the filtering modes. There are Magnification filters (to magnify) and minification filters (to minify). GL_LINEAR is bilinear filtering and GL_LINEAR_MIPMAP_LINEAR is trilinear (which only makes sense for a minification filter).

Then set up the VBO:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, tcVBO);
glTexCoordPointer(2, GL_FLOAT, 0, NULL);

Turn on state:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

Bind the texture:
glBindTexture(GL_TEXTURE_2D, tex);
glEnable(GL_TEXTURE_2D);

Draw:
glDrawArrays(GL_QUADS, 0, 4);

Try the Superbible/blue book instead of the red book, because the red book is old.
|
# ¿ Jun 3, 2010 09:20 |
|
Dijkstracula posted:Sadly, the first half of the Superbible still uses immediate mode / BEGIN/END, so it's only marginally better That's because Benj has decided he's never writing a book again. I'd love to spend a few months writing a really good tutorial, but I think my employer may frown on that. As for shadows, it depends on what you are targeting and what you are doing. Very old stuff and mobile phones with little VRAM will probably benefit from Shadow Volumes if you want to save memory. They also tend to have crappy fill-rate, which puts you in a bind since that technique burns fill. Shadow mapping is the most common implementation, but keep in mind it's harder to have a light the player can walk all the way around cast correct shadows. Think of a brazier or campfire - since you have to draw from the light's POV, you'd have to draw the scene 4-6 times to get all the info you need to do the shadowing. Or you could try something wonky with a 360 degree FOV, but I've never attempted that myself. Or just have certain objects be casters and only draw in that direction.
|
# ¿ Jun 7, 2010 09:01 |
|
On Learning 3D: Really, though, it's better to learn the fundamentals of 3d graphics and real-time rendering and not tie yourself to an API. Once you get to the API level the rules of thumb are pretty simple:
- Put everything into vertex buffers
- Try to keep your scenes smaller than VRAM so you don't have to page
- Do minimal draw calls (cull out stuff that's not visible)
- Sort everything so you have as few state changes as possible - especially shader and rendertarget state (this is really freaking important)
There's more, of course, but that's a decent start.
|
# ¿ Jun 7, 2010 09:06 |
|
If you have a membership, I highly recommend checking out the WWDC OpenCL videos. As for the guy that wrote those - he worked with the OpenCL/GL team last year for a WWDC presentation. That's the Molecule demo if any of you went to the 09 conference. All that info is the result of that work.
|
# ¿ Jun 24, 2010 03:52 |
|
I hate to be the guy that's all like "READ THE SPEC" but it's pretty well defined: http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf From Page 186: The image itself (referred to by data) is a sequence of groups of values. The first group is the lower left back corner of the texture image. Subsequent groups fill out rows of width width from left to right; height rows are stacked from bottom to top forming a single two-dimensional image slice So you've got the data backwards. It's talking about glTexImage3D here because the spec is a mess, but a 2d texture is basically a 3d texture with no depth.
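So if your source image is stored top-down (like most image files), you either flip your texture coordinates or flip the rows before upload. A sketch of the latter, with made-up parameter names:

```c
#include <string.h>
#include <stdlib.h>

/* Flip an image vertically in place so top-down pixel data matches GL's
   bottom-up row order. bpp = bytes per pixel. */
void flip_rows(unsigned char *pixels, int width, int height, int bpp) {
    size_t row = (size_t)width * bpp;
    unsigned char *tmp = malloc(row);
    for (int y = 0; y < height / 2; y++) {
        unsigned char *a = pixels + (size_t)y * row;
        unsigned char *b = pixels + (size_t)(height - 1 - y) * row;
        memcpy(tmp, a, row);   /* swap row y with its mirror row */
        memcpy(a, b, row);
        memcpy(b, tmp, row);
    }
    free(tmp);
}
```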
|
# ¿ Jul 2, 2010 09:33 |
|
I'm not familiar with glfw, but I'd stay away from GLUT. SDL also works. You're probably better off handling context creation/destruction yourself in general. Most image formats dictate endianness, so you shouldn't have to worry about that. OSX has the ImageIO library that will handle most formats. Model loading and animation is the hard part. And the usual caveats:
- Use vertex buffer objects and frame buffer objects
- Batch your state together and change state as little as possible
That said, don't make things overcomplicated.
|
# ¿ Jul 5, 2010 23:09 |
|
The thing about the Superbible is that there are 3 authors, with different parts by each. The original (super old) book was written by Richard Wright. Then Benj Lipchak wrote the shader/more recent stuff. The fourth edition (whenever it became the blue book) has another guy, but I don't know him. It's not a bad place to start, but OpenGL is in an odd place right now and a new book would hopefully cover 4.0 - except that no one has actually written a 4.0 app, so who knows! For learning, you're better off at tutorials or asking here.
|
# ¿ Jul 16, 2010 01:35 |