|
Well, it works fine without those lines when there is just one vertex attribute, but once a second one such as colour is added it no longer works properly; it does work as I would expect once the variables are bound. I can only assume there are some default values in OpenGL that just happened to work when there is only one vertex attribute pointer. Perhaps the fragment shader would behave the same way, breaking if I added another output and didn't bind the outputs to buffers. I'm still wondering why it displays anything at all without binding the variables; shouldn't it be designed so that if a variable is not bound, nothing happens?
|
# ? Aug 24, 2011 16:48 |
|
|
Having failed to make point sprites do quite what I wanted, and after reading about point sprites misbehaving in various ways, I started throwing together my own little billboarding shader for my particular point sprites. I figured I could use my usual 'sprite' indices (so I only need 4 vertices per billboard) and just put all four vertices in the same place, using texture coordinates to tell the shader which corner of the billboard each vertex is supposed to be (and at the same time using them for the texture). So I made this simple vertex shader to test the idea: code:
Just to be clear: all four vertices of each quad share the exact same position but have different texture coordinates, and those corner coordinates are the same from quad to quad. But the quad centered at [0.5, 0.5, 0.1] (all four vertices have that position) comes out much larger than the quad centered at [0.5, 0.5, 0.9]. (The darker blue square at the back is the axis-aligned rectangle from [0,0,1] to [1,1,1].) What am I misunderstanding?
|
# ? Sep 7, 2011 01:15 |
|
Adding one more piece of information: if I set Output.Position.z = 0.5f (or whatever constant value) just before the vertex shader returns, then all the rectangles are the same size (but also all at the same depth in the z-buffer), so I'm definitely not making some weird mistake with what I'm setting the other coordinates to. Mind, what I'm getting is the effect I want, but I was expecting to have to do significantly more implementation work to get it, and I feel like I might be relying on something that would differ on other hardware. Am I misunderstanding what the vertex shader POSITION output is supposed to be? Is it something other than render-area coordinates (like screen coordinates, but not necessarily the whole screen) plus depth-buffer depth?
|
# ? Sep 9, 2011 06:03 |
|
They're still 3D values; they're just in terms of the camera instead of the world. It looks like you're using a perspective projection, so increasing z-distance should make things smaller. It's when you're using an orthographic projection that z-distance doesn't change how big stuff looks; then everything is flat and doesn't change size based on distance. It's not anything specific to the vertex shader itself.
|
# ? Sep 9, 2011 09:28 |
|
roomforthetuna posted:Am I misunderstanding what the vertex shader POSITION output is supposed to be? Is it something other than render-area coordinates (like screen coordinates but not necessarily the whole screen) and depth buffer depth? After projection the coordinates are homogeneous, that is, they have a w-component that doesn't necessarily equal 1. The screen-space position is actually (x/w, y/w, z/w). Here's an article: http://www.teamten.com/lawrence/graphics/homogeneous/ So to adjust the positions for uniform size in the vertex shader I guess you'd want to be using: code:
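For what it's worth, the divide (and the trick of cancelling it to get fixed-size billboards) is easy to sketch in plain C++ rather than shader code; the struct and names here are made up, illustrative only:

```cpp
#include <cassert>

// Clip-space position as output by a vertex shader: (x, y, z, w).
struct Vec4 { float x, y, z, w; };

// The hardware performs this divide after the vertex shader runs;
// normalized device coordinates are what actually determine screen position.
Vec4 toNDC(Vec4 clip) {
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
}

// To keep a billboard corner a fixed on-screen size regardless of depth,
// pre-multiply the corner offset by w so the later divide cancels it out.
Vec4 offsetCorner(Vec4 clip, float dx, float dy) {
    clip.x += dx * clip.w;
    clip.y += dy * clip.w;
    return clip;
}
```

The second function is the "uniform size" adjustment: without the `* clip.w`, a corner offset shrinks with distance exactly as described in the posts above.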
|
# ? Sep 9, 2011 11:58 |
|
Aha, thank you! I had never understood what the w coordinate was about, and that explains it perfectly. Oh, except wait, if the thing distorting the screen-space result is Output.Position.w, then why does changing Output.Position.z after transforms were applied still change the size?
|
# ? Sep 9, 2011 15:30 |
|
roomforthetuna posted:Aha, thank you! I had never understood what the w coordinate was about, and that explains it perfectly. .w and .z map to the same value; it's just called w sometimes to differentiate coordinate systems more clearly. (afaik) Unormal fucked around with this message at 16:05 on Sep 9, 2011 |
# ? Sep 9, 2011 16:02 |
|
Unormal posted:.w and .z map to the same value; It's just called w sometimes to differentiate coordinate systems more clearly. (afaik)
|
# ? Sep 9, 2011 16:20 |
|
I'm not sure if this belongs in general programming or here. I am trying to write an OpenGL renderer in C++ and I want to make as much use of OOP as possible. This runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer) and its destructor unloads it from OpenGL, then any copy of the object will hold an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way.
|
# ? Sep 9, 2011 19:24 |
|
FlyingDodo posted:I'm not sure if this belongs in general programming or here. I am trying to write an OpenGL renderer in C++ and I want to make as much use of OOP as possible. This runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer) and its destructor unloads it from OpenGL, then any copy of the object will hold an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way. Why are you sharing the same GL entities among multiple instances of your own class? There's probably a better way to do this.
|
# ? Sep 9, 2011 19:30 |
|
FlyingDodo posted:I'm not sure if this belongs in general programming or here. I am trying to write an OpenGL renderer in C++ and I want to make as much use of OOP as possible. This runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer) and its destructor unloads it from OpenGL, then any copy of the object will hold an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way. I don't think this is the best way to approach it, but if you're going to do it this way, the classic approach is a reference count on some external object: incremented when something acquires it, decremented when something releases it, with the resources only actually released when the reference (use) count reaches 0.
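A sketch of that refcounting idea, using std::shared_ptr with a custom deleter so the count is managed for you (the release function here is a stand-in for glDeleteTextures, and all the names are made up):

```cpp
#include <cassert>
#include <memory>

// Stand-in for glDeleteTextures; real code would call into GL here,
// on the thread that owns the context. The counter just lets us observe
// that release happens exactly once.
static int g_deleted = 0;
void releaseTexture(unsigned* id) {
    ++g_deleted;
    delete id;
}

// Copies of a TextureHandle bump the reference count; the GL object is
// released exactly once, when the last copy goes away.
using TextureHandle = std::shared_ptr<unsigned>;

TextureHandle makeTexture(unsigned glId) {
    return TextureHandle(new unsigned(glId), releaseTexture);
}
```

This sidesteps the invalid-id-on-copy problem entirely: copies share one underlying handle instead of each pretending to own it.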
|
# ? Sep 9, 2011 20:01 |
|
What would be the better way to make opengl more object oriented?
|
# ? Sep 9, 2011 20:29 |
|
If you wrap up the OpenGL entities in their own separate objects, you can have other objects reference the GL objects (by index into a manager or something) without having to handle creating and destroying the GL objects.
|
# ? Sep 9, 2011 22:22 |
|
ZombieApostate posted:If you wrap up the OpenGL entities in their own separate objects, you can have other objects reference the GL objects (by index into a manager or something) without having to handle creating and destroying the GL objects. This is handy because it allows for, e.g., loading a data file of meshes that knows the names of the textures it wants; I can check the list to see if the required texture is already loaded, load it if not, and store the index with the mesh. In the event of everything getting dumped (as happens if you control-alt-delete, among other conditions) the list still knows the names of the textures and other data files, so it can reload the data without the game code having to intervene. (Game code still has to regenerate any generated data that's stored on the graphics card, though.) And now I'm a bit sad that I didn't think to include "function pointer" as one of the possible data sources in my manager, so that the manager could handle regenerating generated data too. As it is, I just don't store my generated data in the manager. It's also good to store a reference count so that unloading a mesh knows whether it should also unload a texture it depends on.
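In sketch form, that kind of name-keyed, index-returning manager might look like this (hypothetical names, and the actual GL upload is stubbed out as a comment):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Callers hold small integer indices, never raw GL ids, so the manager
// can reload everything after a device loss without the rest of the
// code noticing; the names vector is what makes the reload possible.
class TextureManager {
    std::vector<std::string> names;              // index -> source file name
    std::vector<unsigned> glIds;                 // index -> current GL handle
    std::unordered_map<std::string, std::size_t> lookup;
public:
    std::size_t acquire(const std::string& name) {
        auto it = lookup.find(name);
        if (it != lookup.end()) return it->second;   // already loaded
        std::size_t idx = names.size();
        names.push_back(name);
        glIds.push_back(0);   // real code: glGenTextures + upload the file here
        lookup[name] = idx;
        return idx;
    }
    unsigned id(std::size_t idx) const { return glIds[idx]; }
    std::size_t count() const { return names.size(); }
};
```

Requesting the same name twice hands back the same index, which is the "check the list to see if the required texture is already loaded" step above.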
|
# ? Sep 9, 2011 23:11 |
|
One minor consideration is that if you're going to thread things at all, 3D APIs typically get angry if you do things from a thread other than the active one, and the thread-switching calls (i.e. wglMakeCurrent) can get expensive. In other words, don't destroy your buffers in the thread that drops the refcount to zero; have the rendering thread destroy them.
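A minimal sketch of that deferred-delete pattern (assumed names; the actual glDeleteBuffers call is left as a comment since it must run on the thread that owns the context):

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

// Any thread may schedule a buffer for deletion; only the render thread,
// which owns the GL context, actually issues the delete calls.
class GLDeleteQueue {
    std::mutex m;
    std::vector<unsigned> pending;
public:
    void schedule(unsigned id) {                 // safe from any thread
        std::lock_guard<std::mutex> lock(m);
        pending.push_back(id);
    }
    // Call once per frame from the render thread; returns how many
    // handles were flushed this time.
    std::size_t drain() {
        std::vector<unsigned> batch;
        {
            std::lock_guard<std::mutex> lock(m);
            batch.swap(pending);
        }
        // real code: glDeleteBuffers((GLsizei)batch.size(), batch.data());
        return batch.size();
    }
};
```

The swap-under-lock keeps the critical section tiny, so worker threads never wait on GL.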
|
# ? Sep 10, 2011 00:43 |
|
FlyingDodo posted:I'm not sure if this belongs in general programming or here. I am trying to write an OpenGL renderer in C++ and I want to make as much use of OOP as possible. This runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer) and its destructor unloads it from OpenGL, then any copy of the object will hold an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way. I think the best solution to your problem is to not allow copies of the object at all, as it doesn't make sense to have more than one object in memory for the same resource. You're much better off using smart pointers, or having an object manager that controls when objects are deleted (say, on a level change). roomforthetuna posted:This is approximately what I do with my DirectX things - I have a singleton containing an extendable array of each of the actual [texture/mesh/whatever] handles, then my code references them by array index so it's a nice quick lookup. There's some trickiness with needing to reallocate over deleted objects rather than collapsing the array down (which would invalidate or confuse all the higher IDs), but it's not too bad. I've seen a few people do it this way, but one thing I've never quite grasped: why not just use a pointer to a wrapper class rather than an index into an array? Keeping an ID rather than a pointer seems like it adds an extra, unnecessary layer of indirection, since accessing simple information has to go through the singleton rather than simply calling a method on the object itself.
Not to mention with pointers you wouldn't have to worry about any of the ID management you have to deal with at the moment. Plus, using smart pointers is generally more reliable than manually updating reference counts.
|
# ? Sep 10, 2011 17:20 |
|
AntiPseudonym posted:I think the best solution to your problem is to not actually allow copies of the object at all, as it doesn't make sense to have more than one object in memory for the same resource. You're much better off just using smart pointers or having an object manager that controls when objects are deleted (Say on a level change). Well, if you have a bunch of actual pure pointers to object X, you can never replace object X. Imagine if you want to dynamically load/unload the underlying resource, or replace it with a different level of detail model, for example. If you have pointers, you'd have to go through every parent-object that contains the pointer and update it to the new object. If all of your parent-objects just say "give me object 8 out of that array" (basically) then you can load/unload or replace object 8 any time. There are times that direct pointers can be better, but they're not obviously better for all use cases.
|
# ? Sep 10, 2011 17:23 |
|
Unormal posted:Well, if you have a bunch of actual pure pointers to object X, you can never replace object X. Imagine if you want to dynamically load/unload the underlying resource, or replace it with a different level of detail model, for example. If you have pointers, you'd have to go through every parent-object that contains the pointer and update it to the new object. If all of your parent-objects just say "give me object 8 out of that array" (basically) then you can load/unload or replace object 8 any time. We're talking about a pointer to a wrapper, though, not a direct pointer to the D3D object. You could just switch over the object within the wrapper and everything else will start using the new resource, same as replacing the object in the array. AntiPseudonym fucked around with this message at 17:32 on Sep 10, 2011 |
# ? Sep 10, 2011 17:30 |
|
AntiPseudonym posted:We're talking about a pointer to a wrapper, though, not a direct pointer to the D3DTexture. You could just replace the actual pointer within the wrapper and everything else will start using it, same as replacing the object in the array. Ah, I see what you're suggesting. I think that approach would work as well, I just think it's a less conventional approach. Frankly it'd probably be a better way of thinking for vastly multi-threaded environments. Unormal fucked around with this message at 17:36 on Sep 10, 2011 |
# ? Sep 10, 2011 17:33 |
|
I'm starting to think the way to deal with it is to disable copying. I found this blog post: http://jrdodds.blogs.com/blog/2004/04/disallowing_cop.html So: just declare a private copy constructor and operator= without implementing them. Good idea, bad idea? It seems like the easiest way to deal with it, because it means I can keep my code as-is and have the OpenGL object deleted in the destructor. The code wouldn't even compile if there were an attempt to copy an object.
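In C++11 terms the same idea is spelled with `= delete`, plus move operations so the objects can still live in containers; a sketch (the GL delete call is stubbed as a comment, and the class name is made up):

```cpp
#include <cassert>
#include <utility>

// Copying is deleted outright (caught at compile time, like the private
// copy constructor trick), but moving transfers ownership of the GL id.
class Buffer {
    unsigned id;
public:
    explicit Buffer(unsigned glId) : id(glId) {}
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    Buffer(Buffer&& o) noexcept : id(o.id) { o.id = 0; }
    Buffer& operator=(Buffer&& o) noexcept { std::swap(id, o.id); return *this; }
    ~Buffer() { /* if (id) glDeleteBuffers(1, &id); */ }
    unsigned get() const { return id; }
};
```

A moved-from Buffer holds id 0, so its destructor safely does nothing, and only one object ever owns the GL handle.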
|
# ? Sep 10, 2011 17:55 |
|
AntiPseudonym posted:I've seen a few people do it this way, but one thing I've never quite grasped: Why not just use a pointer to a wrapper class rather than an index into an array? You'd still need to keep a list of all your wrapper pointers, wouldn't you? For, e.g., when DirectX loses all the resources and you have to reinitialize them all, or for when you want to check whether a particular resource by name is already loaded. Not saying the wrapper wouldn't work; you could have wrappers in some sort of std::list or std::map and it'd make it easier to manage removal on unload than an array. So the real reason is probably "because I didn't think it through fully". But I don't think wrappers would be that much of an advantage, because either way you still end up with a singleton list. Actually, the array system does have one other advantage for me, but it's not a generalizable thing: my objects are indexed by fileindex and objectindex, which combined are a 32-bit value; it makes it easy for the file to contain references to other objects in the same file, or objects from a specific other file. You can't link Mesh_X to (pointer) inside a data file, but you can link it to an index. Again, not saying you couldn't work around this and convert indices to pointers while loading, but it's a convenience of arrays that isn't present with pointers: you can predict an index for a thing that isn't yet loaded.
|
# ? Sep 10, 2011 18:11 |
|
roomforthetuna posted:You'd still need to keep a list of all your wrapper pointers wouldn't you? For, eg. when DirectX loses all the resources and you have to reinitialize them all, or for when you want to check whether a particular resource by name is already loaded. There are definitely good reasons to have a big list o' pointers hanging around for sure. quote:But I don't think wrappers would be that much of an advantage, because either way you still end up with a singleton list. It's more of a convenience/bulletproofing thing. A container of pointers has a lot more freedom to chop and change than an array of IDs does, without all of the collapsing/reshuffling you have to do currently. It also just seems cleaner for all the relevant information about an object to be immediately accessible from the object itself rather than constantly going through an intermediary. Also, it becomes a LOT easier to ditch the singleton if the objects are in charge of themselves. The only things that should know about your Big-rear end Resource List are things that add new resources or do operations on the whole list (which honestly shouldn't be much at all, maybe just your renderer and mesh loader), so you can just pass a pointer to your ResourceManager into whatever methods require it. AntiPseudonym fucked around with this message at 19:02 on Sep 10, 2011
# ? Sep 10, 2011 18:56 |
|
I have a question about OpenGL / VBOs: I have code that creates a geodesic sphere by subdivision. It works fine, although there are tons of duplicated vertices (something like 6,000 unique, 60,000 with duplicates). Right now my dumb way of deciding whether a new index should be created is to compare each vertex to be added against every vertex already in the array; if it finds the same vertex it appends that index to the index list, otherwise it adds the vertex to the vertex list and creates a new index. Is this stupid? Is there something fundamentally wrong with how I'm creating the sphere? It's frustrating because a lot of OpenGL examples are in immediate mode, are deliberately inefficient for the sake of being explicit, or use antiquated things like display lists. edit: Is there somewhere to look for OpenGL best practices? Is this something the red book would have?
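The all-pairs comparison is O(n²); the usual fix is a hash map keyed on the vertex so each lookup is expected O(1). A sketch (made-up names; duplicates from subdivision are bit-identical, so exact float comparison is fine here):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

struct Vertex {
    float x, y, z;
    bool operator==(const Vertex& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

struct VertexHash {
    std::size_t operator()(const Vertex& v) const {
        // Hash the raw float bits; safe because duplicates are bit-identical.
        std::uint32_t bits[3];
        std::memcpy(bits, &v, sizeof bits);
        std::size_t h = 17;
        for (std::uint32_t b : bits) h = h * 31 + b;
        return h;
    }
};

// Builds a deduplicated vertex list plus an index list referencing it,
// in expected O(n) instead of comparing every vertex against all others.
void deduplicate(const std::vector<Vertex>& in,
                 std::vector<Vertex>& verts,
                 std::vector<std::uint32_t>& indices) {
    std::unordered_map<Vertex, std::uint32_t, VertexHash> seen;
    for (const Vertex& v : in) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            it = seen.emplace(v, (std::uint32_t)verts.size()).first;
            verts.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```

For ~60,000 input vertices this is the difference between tens of thousands of comparisons per vertex and one hash lookup per vertex.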
|
# ? Sep 10, 2011 19:08 |
|
Unormal posted:If you have pointers, you'd have to go through every parent-object that contains the pointer and update it to the new object. or use placement new (don't do this)
|
# ? Sep 10, 2011 20:11 |
|
passionate dongs posted:Is this stupid/is there something fundamentally wrong with how I'm creating the sphere?
|
# ? Sep 10, 2011 21:13 |
|
passionate dongs posted:edit: Is there somewhere to look for OpenGL best practices? Is this something the red book would have? There's really only the most basic of "best practices" out there. You can look at: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide and the more recent WWDC talks on it. Also keep in mind that it varies (sometimes quite heavily) by GPU vendor and platform.
|
# ? Sep 16, 2011 01:28 |
|
I recently upgraded my computer, and of relevance is the graphics card: I had a Radeon 9700 and now have a Radeon HD 6970. I have noticed that T-junctions no longer seem to be an issue. On the old card, if I rendered anything in OpenGL that had T-junctions there would be very visible and hideous glitches, so I added code to insert additional vertices where required to fix them. With my new graphics card they don't seem to appear at all. Is it still necessary to make sure a polygon mesh has no T-junctions, along with things like vertex welding?
|
# ? Sep 16, 2011 17:39 |
|
Yes, you're just lucky that it doesn't look as bad in your specific case. T-junctions are always a bad thing.
|
# ? Sep 16, 2011 18:15 |
|
Is there some way in shaders to read the pixel that's 'behind' the pixel you're rendering? Specifically, a pixel shader can, e.g., say "this pixel has 50% alpha", which then blends the color appropriately with the color of the pixel behind it. Is there some way to say, for example, "I want to blend with the inverse of the pixel behind"?
|
# ? Sep 22, 2011 16:16 |
|
Blend shaders don't exist yet; the "best" way to do custom blends is to use MRT to render to a texture and the screen at the same time, then use that texture in the pixel shader of later operations. If you just want to invert the color of the destination, you might be able to use ONE_MINUS_DST_COLOR.
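To make the ONE_MINUS_DST_COLOR suggestion concrete, here's what the fixed-function blend stage computes per channel, sketched in C++ (the function name is made up; the factor names mirror the GL enumerants):

```cpp
#include <cassert>

// Fixed-function blending computes: out = src * srcFactor + dst * dstFactor.
// With glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO), a white source pixel
// (src = 1.0) writes the inverse of whatever was already in the framebuffer.
float blendOneMinusDstColor(float src, float dst) {
    float srcFactor = 1.0f - dst;   // GL_ONE_MINUS_DST_COLOR
    float dstFactor = 0.0f;         // GL_ZERO
    return src * srcFactor + dst * dstFactor;
}
```

So drawing solid white with that blend mode gives out = 1 - dst, which is exactly the "blend with the inverse of the pixel behind" effect asked about above.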
|
# ? Sep 22, 2011 16:28 |
|
So, I'm working on a simple program in OpenGL, with a pretty simple shader. My vertex shader has the following code:code:
code:
|
# ? Sep 28, 2011 01:32 |
|
#version 130, not 330
|
# ? Sep 28, 2011 02:13 |
|
Thanks, but I just figured it out. Turns out it was much dumber: the text files I was editing in VS2010 were separate from the ones the program was actually referencing. One of those "oh, I'm dumb" moments. Thanks for the quick response though!
|
# ? Sep 28, 2011 02:30 |
|
Not really a programming question, but I'd like to know what you know-more-than-I-dos think. I remember a while back some ATi guy saying something like how much better it'd be if developers were able to program the GPU directly rather than depending on the graphics company's implementation of a library, with a chorus of major engine developers saying that would be pretty awesome, then the ATi guy immediately backpedaling and going "HEY WHAT I MEANT TO SAY WAS OH BOY ISN'T DIRECTX JUST GREAT?" Has there been much consideration of going this route, given that the graphics companies would have to agree on some common instruction set? Would there be too many drawbacks? Given that everyone agreed on an instruction set, we could still have all those libraries, except their development could be more transparent; more than that, developers wouldn't be so restricted to them. Is this just a case where it's what everyone wants, and it's just that nVidia and AMD/ATi wouldn't go along with it? Or is this thinking flawed?
|
# ? Oct 26, 2011 22:52 |
|
It's flawed because developers don't want it either. See CTM on the ATI side for a nice example: it's really specific to the actual card you're running on. CUDA was more lenient, but developers are annoyed enough at having to vary renderer behavior by hardware when they DO use the same API. Card-specific behavior works great when your audience uses one card. That isn't the PC market. Card-specific behavior there means you're sinking development resources into a fraction of your audience.
|
# ? Oct 26, 2011 23:17 |
|
OneEightHundred posted:Making card-specific behavior works great when your audience uses one card. That isn't the PC market. Making card-specific behavior there means you're sinking development resources into a fraction of your audience. Or, if you're Epic and are willing to sink a lot of effort into writing low-level engine code so that you can sell the engine at a markup to a gaggle of independent developers. But no, in general it's not a good idea because GPUs change hardware way faster than CPUs do right now, and the major vendors are very different under the hood in a lot of important ways, so it would be hard to come together with a fully shared ISA.
|
# ? Oct 27, 2011 02:31 |
|
OneEightHundred posted:Making card-specific behavior there means you're sinking development resources into a fraction of your audience. Or it means you're targeting a console, where doing this is much more practical and encouraged and one of the big reasons a console is viable for much longer than a PC with the equivalent specs on paper.
|
# ? Oct 27, 2011 06:10 |
|
Hubis posted:Or, if you're Epic and are willing to sink a lot of effort into writing low-level engine code so that you can sell the engine at a markup to a gaggle of independent developers. haveblue posted:Or it means you're targeting a console, where doing this is much more practical and encouraged and one of the big reasons a console is viable for much longer than a PC with the equivalent specs on paper. There are other reasons too; a large part of it is that developers aren't terrifically interested in scalability at the moment.
|
# ? Oct 27, 2011 15:07 |
|
OneEightHundred posted:Epic would rather spend their time writing code with tangible benefits to as many users as possible too. The fact that it affects more users does not change the fact that, relative to other features they could be implementing, it isn't a very productive use of time. My point was that the people you hear advocating it are all developers who have middleware to sell -- it would be a HUGE investment of time and effort to implement a flexible, multi-platform low-level engine, but could also provide them with performance that would give them a competitive advantage over developers with less resources to implement and maintain such a codebase. Remember back when Mark Rein was out stomping for Larrabee? But that's neither here nor there, because this will never happen. CUDA/OpenCL with some limited access to fixed-function units is probably as close as you'll ever get.
|
# ? Oct 27, 2011 15:25 |
|
|
Hubis posted:Remember back when Mark Rein was out stomping for Larrabee? quote:CUDA/OpenCL with some limited access to fixed-function units is probably as close as you'll ever get. All of this is very different from saying that anyone wants a direct hardware tap though. There's a reason that DirectCompute and OpenCL exist. OneEightHundred fucked around with this message at 17:09 on Oct 27, 2011 |
# ? Oct 27, 2011 17:05 |