FlyingDodo
Jan 22, 2005
Not Extinct
Well, it works fine without those lines when there is just one vertex attribute, but once a second one such as colour is added it no longer works properly, though it does work as I would expect once the variables are bound. I can only assume there are some default values set in OpenGL that just happened to work when there was only one vertex attribute pointer. Perhaps the fragment shader would work the same way, breaking if I added another output and didn't bind the outputs to buffers. I'm still wondering why it displays anything at all without binding the variables; shouldn't it be designed so that if a variable isn't bound, nothing happens?
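
For reference, a minimal sketch of the explicit binding in question, assuming compiled shader objects vertexShader and fragmentShader and attribute names "position" and "colour" (the names here are illustrative, not taken from the actual code):
code:
// Bind generic attribute indices to the shader's attribute names before linking,
// so the indices used with glVertexAttribPointer are the ones you expect.
// Without this the driver picks the locations itself, and it only works by luck.
GLuint prog = glCreateProgram();
glAttachShader(prog, vertexShader);
glAttachShader(prog, fragmentShader);
glBindAttribLocation(prog, 0, "position");  // attribute index 0 -> "position"
glBindAttribLocation(prog, 1, "colour");    // attribute index 1 -> "colour"
glLinkProgram(prog);                        // bindings take effect at link time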

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
Having failed to make point sprites do quite what I wanted, and after reading about point sprites misbehaving in various ways, I started throwing together my own little billboarding shader for my particular point sprites. I figured I can use my usual 'sprite' indices (so I only need 4 vertices per billboard), and just put four vertices all in the same place, using texture coordinates to inform the shader which corner the vertex is supposed to be (and at the same time using them for the texture).
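
As an illustration only (not the actual code), the per-billboard vertex data could be laid out something like this, with an assumed Vertex struct and px/py/pz standing in for the billboard's shared world position:
code:
// Sketch: four vertices sharing one position; the texcoords double as the
// corner selector for the shader and as texture coordinates for the sprite.
struct Vertex { float x, y, z; float u, v; };

Vertex quad[4] = {
    { px, py, pz, 0.0f, 0.0f },  // corner (0,0)
    { px, py, pz, 1.0f, 0.0f },  // corner (1,0)
    { px, py, pz, 0.0f, 1.0f },  // corner (0,1)
    { px, py, pz, 1.0f, 1.0f },  // corner (1,1)
};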

So I made this simple vertex shader to test the idea:
code:
// (struct and matrix declarations inferred from the usage below, for context)
float4x4 g_mWorldViewProjection;

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

VS_OUTPUT RenderPointsVS( float4 vPos : POSITION, float2 tex : TEXCOORD0 )
{
    VS_OUTPUT Output;
    Output.Position = mul(vPos, g_mWorldViewProjection); // transform to screen space
    Output.Position.x = Output.Position.x + tex.x*0.02f; // push this corner out in x
    Output.Position.y = Output.Position.y - tex.y*0.02f; // and in y
    Output.TexCoord = tex;
    return Output;
}
The thing I really don't get is that when I render a line of 10 of these quads going into the distance (my camera is pointing at 30 degrees or so, so I can see them all), they are positioned where I would expect, but they get smaller as they get further away. Shouldn't they all be the same size, since I'm stretching out the quad (always by the same amounts) after all the camera/projection transforms? The z-coordinate shouldn't be making a difference to the size then, should it?

Just to be clear, all four vertices of each quad have the exact same position, and different texture coordinates. And the four corners each have different texture coordinates but are the same between quads.
But the quad centered at [0.5,0.5f,0.1f] (all four vertices have that position)
comes out much larger than the quad centered at [0.5,0.5,0.9f]


(The darker blue square at the back is the axis-aligned rectangle from [0,0,1] to [1,1,1])

What am I misunderstanding?

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
Adding one more piece of information, if I set Output.Position.z=0.5f; (or whatever constant value) just before the vertex shader returns then all the rectangles are the same size (but also all the same position in the z buffer), so I'm definitely not making some weird mistake with what I'm setting the other coordinates to.

Mind, what I'm getting is the effect that I want, but I was expecting to have to do significantly more implementation to get it, and I feel like I might be experiencing something weird that would differ on other hardware.

Am I misunderstanding what the vertex shader POSITION output is supposed to be? Is it something other than render-area coordinates (like screen coordinates but not necessarily the whole screen) and depth buffer depth?

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
They're still 3D values; they're just in terms of the camera instead of in terms of the world. It looks like you're using a perspective projection, so increasing z-distance should make them smaller. It's when you're using an orthographic projection that z-distance doesn't change how big stuff looks: then everything is flat and doesn't change size based on distance. It's not anything specific to the vertex shader itself.

Facejar
Apr 28, 2008

roomforthetuna posted:

Am I misunderstanding what the vertex shader POSITION output is supposed to be? Is it something other than render-area coordinates (like screen coordinates but not necessarily the whole screen) and depth buffer depth?

After projection the coordinates are homogeneous; that is, they have a w-component that doesn't necessarily equal 1. The screen-space position is actually (x/w, y/w, z/w). Here's an article: http://www.teamten.com/lawrence/graphics/homogeneous/

So to adjust the positions for uniform size in the vertex shader I guess you'd want to be using:
code:
Output.Position.x=Output.Position.x + tex.x*0.02f*Output.Position.w;
Output.Position.y=Output.Position.y - tex.y*0.02f*Output.Position.w;
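For anyone following along, the divide the hardware performs after the vertex shader (which is what makes the *w adjustment work) looks roughly like this written out as plain code; the Float4 struct is purely illustrative:
code:
// Illustration only: clip space -> normalized device coordinates.
struct Float4 { float x, y, z, w; };

Float4 clipToNdc(Float4 clip)
{
    Float4 ndc;
    ndc.x = clip.x / clip.w;  // x in [-1, 1] across the viewport
    ndc.y = clip.y / clip.w;  // y in [-1, 1]
    ndc.z = clip.z / clip.w;  // depth that ends up in the z-buffer
    ndc.w = 1.0f;
    return ndc;
}
// An offset of tex.x*0.02*w added in clip space becomes a constant 0.02*tex.x
// after the divide, which is why the quads come out a uniform on-screen size.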

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
Aha, thank you! I had never understood what the w coordinate was about, and that explains it perfectly.

Oh, except wait, if the thing distorting the screen-space result is Output.Position.w, then why does changing Output.Position.z after transforms were applied still change the size?

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe

roomforthetuna posted:

Aha, thank you! I had never understood what the w coordinate was about, and that explains it perfectly.

Oh, except wait, if the thing distorting the screen-space result is Output.Position.w, then why does changing Output.Position.z after transforms were applied still change the size?

.w and .z map to the same value; it's just called w sometimes to differentiate the coordinate systems more clearly. (afaik)

Unormal fucked around with this message at 16:05 on Sep 9, 2011

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Unormal posted:

.w and .z map to the same value; It's just called w sometimes to differentiate coordinate systems more clearly. (afaik)
Aha, so w is just "z that's already been transformed into homogeneous coordinates". Thanks again, now it all makes sense. I'd only ever really dealt with the coordinate systems of "ortho projection, so z is just a depth buffer value" and "non-ortho before feeding it through the worldviewproj matrix", so I had no idea the post-transform non-ortho coordinates worked this way.

FlyingDodo
Jan 22, 2005
Not Extinct
I'm not sure if this should go in the general programming thread or here. I am trying to make an OpenGL renderer in C++ and I want to make as much use of OOP as possible, which runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer, etc.) and the destructor unloads it from OpenGL, then any copy of the object will have an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way.

haveblue
Aug 15, 2005



Toilet Rascal

FlyingDodo posted:

I'm not sure if this should go in the general programming thread or here. I am trying to make an OpenGL renderer in C++ and I want to make as much use of OOP as possible, which runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer, etc.) and the destructor unloads it from OpenGL, then any copy of the object will have an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way.

Why are you sharing the same GL entities among multiple instances of your own class? There's probably a better way to do this.

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe

FlyingDodo posted:

I'm not sure if this should go in the general programming thread or here. I am trying to make an OpenGL renderer in C++ and I want to make as much use of OOP as possible, which runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer, etc.) and the destructor unloads it from OpenGL, then any copy of the object will have an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way.

I don't think this is the best way to approach it; but if you're going to do it this way, the classic way would be a reference count on some external object, incremented when something uses it and decremented when something releases it, and only actually release the resources when the reference (use) count reaches 0.
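
A minimal sketch of that idea, using OpenGL textures as the example resource; the names are illustrative, and a real version would also need a copy-assignment operator (rule of three):
code:
#include <GL/gl.h>

// The GL id lives in one shared record; copies only bump the count, and the
// GL object is released only when the last handle goes away.
struct GLTextureRecord {
    GLuint id;
    int    refs;
};

class TextureHandle {
public:
    explicit TextureHandle(GLTextureRecord* r) : rec(r) { ++rec->refs; }
    TextureHandle(const TextureHandle& other) : rec(other.rec) { ++rec->refs; }
    ~TextureHandle() {
        if (--rec->refs == 0) {
            glDeleteTextures(1, &rec->id);  // last owner releases the GL object
            delete rec;
        }
    }
private:
    GLTextureRecord* rec;
};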

FlyingDodo
Jan 22, 2005
Not Extinct
What would be the better way to make opengl more object oriented?

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
If you wrap up the OpenGL entities in their own separate objects, you can have other objects reference the GL objects (by index into a manager or something) without having to handle creating and destroying the GL objects.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

ZombieApostate posted:

If you wrap up the OpenGL entities in their own separate objects, you can have other objects reference the GL objects (by index into a manager or something) without having to handle creating and destroying the GL objects.
This is approximately what I do with my DirectX things - I have a singleton containing an extendable array of each of the actual [texture/mesh/whatever] handles, then my code references them by array index so it's a nice quick lookup. There's some trickiness with needing to reallocate over deleted objects rather than collapsing the array down (which would invalidate or confuse all the higher IDs), but it's not too bad.
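
A rough sketch of that kind of manager, with made-up names; the free list is what lets deleted slots be reused without shifting the higher indices:
code:
#include <string>
#include <vector>

struct TextureEntry {
    std::string name;    // source filename, kept so the resource can be reloaded
    unsigned    handle;  // the actual API handle; 0 while unloaded
    bool        inUse;
};

class TextureManager {
public:
    size_t add(const std::string& name, unsigned handle) {
        size_t index;
        if (!freeSlots.empty()) {            // reuse a deleted slot if one exists
            index = freeSlots.back();
            freeSlots.pop_back();
        } else {
            index = entries.size();
            entries.push_back(TextureEntry());
        }
        entries[index].name   = name;
        entries[index].handle = handle;
        entries[index].inUse  = true;
        return index;                        // callers store this index, not the handle
    }
    void remove(size_t index) {
        entries[index].inUse = false;
        freeSlots.push_back(index);          // higher indices stay valid
    }
    TextureEntry& get(size_t index) { return entries[index]; }
private:
    std::vector<TextureEntry> entries;
    std::vector<size_t>       freeSlots;
};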

This is handy because it allows for, eg. loading a data file of meshes which knows the names of the textures it wants, then I can check the list to see if the required texture is already loaded, load it if not, and store the index with the mesh; in the event of everything getting dumped (as happens if you control-alt-delete or various other conditions) the list still knows the names of textures and other data files so it can reload the data without the game code having to intervene. (Game code still has to regenerate any generated data that's stored on the graphics card though.)

And now I'm a bit sad that I didn't think to include "function pointer" as one of the possible data sources in my manager, so that the manager could handle regenerating generated data too. But the way I have it I just don't store my generated data in the manager.

It's also good to store a reference count so that unloading a mesh knows whether it should also unload a dependency texture.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
One minor consideration is that if you're going to thread things at all, 3D APIs typically get angry if you do things from a thread other than the active one, and the thread-switching calls (i.e. wglMakeCurrent) can get expensive.

In other words, don't destroy your buffers in the thread that drops the refcount to zero; have the rendering thread destroy them.
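
One hedged way to arrange that, as a sketch; it assumes a GL loader such as GLEW exposes glDeleteBuffers, and the function names are made up:
code:
#include <GL/glew.h>   // or whichever loader exposes glDeleteBuffers
#include <mutex>
#include <vector>

static std::mutex          g_pendingMutex;
static std::vector<GLuint> g_pendingDeletes;

// Safe to call from any thread: just records the id for later.
void queueBufferDelete(GLuint id) {
    std::lock_guard<std::mutex> lock(g_pendingMutex);
    g_pendingDeletes.push_back(id);
}

// Called once per frame on the thread that owns the GL context.
void drainPendingDeletes() {
    std::vector<GLuint> doomed;
    {
        std::lock_guard<std::mutex> lock(g_pendingMutex);
        doomed.swap(g_pendingDeletes);
    }
    if (!doomed.empty())
        glDeleteBuffers(static_cast<GLsizei>(doomed.size()), doomed.data());
}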

AntiPseudonym
Apr 1, 2007
I EAT BABIES

:dukedog:

FlyingDodo posted:

I'm not sure if this should go in the general programming thread or here. I am trying to make an OpenGL renderer in C++ and I want to make as much use of OOP as possible, which runs into some problems. For example, if I have any class that stores an OpenGL object id (a texture, vertex array object, buffer, etc.) and the destructor unloads it from OpenGL, then any copy of the object will have an invalid id. What would be the best way of dealing with this? All I can think of is to have the destructor do its thing and have the copy constructor actually create another OpenGL object, which I think would be slow and pointless. I can't be the first person to wrap OpenGL in objects, so there must be a better way.

I think the best solution to your problem is to not actually allow copies of the object at all, as it doesn't make sense to have more than one object in memory for the same resource. You're much better off just using smart pointers or having an object manager that controls when objects are deleted (Say on a level change).

roomforthetuna posted:

This is approximately what I do with my DirectX things - I have a singleton containing an extendable array of each of the actual [texture/mesh/whatever] handles, then my code references them by array index so it's a nice quick lookup. There's some trickiness with needing to reallocate over deleted objects rather than collapsing the array down (which would invalidate or confuse all the higher IDs), but it's not too bad.

I've seen a few people do it this way, but one thing I've never quite grasped: Why not just use a pointer to a wrapper class rather than an index into an array?

Keeping an ID rather than just a pointer seems like it's adding an extra unnecessary layer of indirection, since accessing simple information has to be done through the singleton rather than simply using a method on the object itself. Not to mention with pointers you wouldn't have to worry about any of the ID management you have to deal with at the moment. Plus using smart pointers is generally more reliable than manually updating the reference counts.

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe

AntiPseudonym posted:

I think the best solution to your problem is to not actually allow copies of the object at all, as it doesn't make sense to have more than one object in memory for the same resource. You're much better off just using smart pointers or having an object manager that controls when objects are deleted (Say on a level change).


I've seen a few people do it this way, but one thing I've never quite grasped: Why not just use a pointer to a wrapper class rather than an index into an array?

Keeping an ID rather than just a pointer seems like it's adding an extra unnecessary layer of indirection, since accessing simple information has to be done through the singleton rather than simply using a method on the object itself. Not to mention with pointers you wouldn't have to worry about any of the ID management you have to deal with at the moment. Plus using smart pointers is generally more reliable than manually updating the reference counts.

Well, if you have a bunch of actual pure pointers to object X, you can never replace object X. Imagine if you want to dynamically load/unload the underlying resource, or replace it with a different level of detail model, for example. If you have pointers, you'd have to go through every parent-object that contains the pointer and update it to the new object. If all of your parent-objects just say "give me object 8 out of that array" (basically) then you can load/unload or replace object 8 any time.

There are times that direct pointers can be better, but they're not obviously better for all use cases.

AntiPseudonym
Apr 1, 2007
I EAT BABIES

:dukedog:

Unormal posted:

Well, if you have a bunch of actual pure pointers to object X, you can never replace object X. Imagine if you want to dynamically load/unload the underlying resource, or replace it with a different level of detail model, for example. If you have pointers, you'd have to go through every parent-object that contains the pointer and update it to the new object. If all of your parent-objects just say "give me object 8 out of that array" (basically) then you can load/unload or replace object 8 any time.

There are times that direct pointers can be better, but they're not obviously better for all use cases.

We're talking about a pointer to a wrapper, though, not a direct pointer to the D3D object. You could just switch over the object within the wrapper and everything else will start using the new resource, same as replacing the object in the array.

AntiPseudonym fucked around with this message at 17:32 on Sep 10, 2011

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe

AntiPseudonym posted:

We're talking about a pointer to a wrapper, though, not a direct pointer to the D3DTexture. You could just replace the actual pointer within the wrapper and everything else will start using it, same as replacing the object in the array.

Ah, I see what you're suggesting.

I think that approach would work as well; it's just a less conventional one. Frankly it'd probably be a better way of thinking about it for heavily multi-threaded environments.

Unormal fucked around with this message at 17:36 on Sep 10, 2011

FlyingDodo
Jan 22, 2005
Not Extinct
I'm starting to think a way to deal with it is to disable copying. I found this blog post: http://jrdodds.blogs.com/blog/2004/04/disallowing_cop.html

So just have a private copy constructor and operator= without implementing them.

Good idea, bad idea? It seems like the easiest way to deal with it because it means I can keep my code as-is and have the opengl object deleted in the destructor. This wouldn't even compile if there is an attempt to copy an object.
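
A minimal sketch of that, with an illustrative texture wrapper; on a C++11 compiler the same thing can be spelled with = delete:
code:
#include <GL/gl.h>

class Texture {
public:
    explicit Texture(GLuint id) : id_(id) {}
    ~Texture() { glDeleteTextures(1, &id_); }  // safe: there can only ever be one owner

private:
    // Declared but never defined: any attempt to copy fails to compile
    // (or, for code inside the class itself, fails to link).
    Texture(const Texture&);
    Texture& operator=(const Texture&);

    GLuint id_;
};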

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

AntiPseudonym posted:

I've seen a few people do it this way, but one thing I've never quite grasped: Why not just use a pointer to a wrapper class rather than an index into an array?
You'd still need to keep a list of all your wrapper pointers wouldn't you? For, eg. when DirectX loses all the resources and you have to reinitialize them all, or for when you want to check whether a particular resource by name is already loaded.

Not saying the wrapper wouldn't work; you could have wrappers in some sort of std::list or std::map and it'd make it easier to manage removal on unload than an array. So the real reason is probably "because I didn't think it through fully". But I don't think wrappers would be that much of an advantage, because either way you still end up with a singleton list.

Actually, the array system does have one other advantage for me, but it's not a generalizable thing - my objects are indexed by fileindex and objectindex, which combined are a 32-bit value; it makes it easy for the file to contain references to other objects in the same file, or objects from a specific other file. You can't link Mesh_X to (pointer) inside the data file, but you can link it to an index. Again, not saying you couldn't work around this and convert indices to pointers while loading, but it's a convenience of arrays that isn't present with pointers, that you can predict an index for a thing that isn't yet loaded.

AntiPseudonym
Apr 1, 2007
I EAT BABIES

:dukedog:

roomforthetuna posted:

You'd still need to keep a list of all your wrapper pointers wouldn't you? For, eg. when DirectX loses all the resources and you have to reinitialize them all, or for when you want to check whether a particular resource by name is already loaded.

There are definitely good reasons to have a big list o' pointers hanging around for sure.

quote:

But I don't think wrappers would be that much of an advantage, because either way you still end up with a singleton list.

It's more of a convenience/bulletproofing thing. A container of pointers gives you a lot more freedom to chop and change than an array of IDs does, without all of the collapsing/reshuffling you have to do currently. It also just seems cleaner for all the relevant information about an object to be immediately accessible on the object itself rather than constantly going through an intermediary.

Also it becomes a LOT easier to ditch the singleton if the objects are in charge of themselves. The only things that should know about your Big-rear end Resource List should be anything that's going to be adding new resources or doing operations on the whole list (which honestly shouldn't be that much at all, maybe just your renderer and mesh loader), so you can just pass a pointer to your ResourceManager to whatever methods require it.

AntiPseudonym fucked around with this message at 19:02 on Sep 10, 2011

passionate dongs
May 23, 2001

Snitchin' is Bitchin'
I have a question about OpenGL / VBOs:

I have code that creates a geodesic sphere by subdivision. It works fine, although there are tons of duplicated vertices (something like 6000 unique, 60000 with duplicates). Right now my dumb way of figuring out if a new index should be created is comparing the new vertex to be added to every vertex previously in the array, and if it finds the same vertex it appends the index to the index list. Otherwise, it adds the vertex to the vertex list and creates a new index.

Is this stupid/is there something fundamentally wrong with how I'm creating the sphere? It's frustrating because a lot of OpenGL examples are in immediate mode / inefficient to be explicit / use antiquated things like display lists.

edit: Is there somewhere to look for OpenGL best practices? Is this something the red book would have?

TasteMyHouse
Dec 21, 2006

Unormal posted:

If you have pointers, you'd have to go through every parent-object that contains the pointer and update it to the new object.

or use placement new (don't do this)

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

passionate dongs posted:

Is this stupid/is there something fundamentally wrong with how I'm creating the sphere?
Use a hash table for the vertex lookups and it will go about 3000 times faster.
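
A sketch of what that lookup could look like with std::unordered_map; the Vertex type and the hash function are illustrative (the hash just mixes the raw float bits):
code:
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

struct Vertex {
    float x, y, z;
    bool operator==(const Vertex& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct VertexHash {
    std::size_t operator()(const Vertex& v) const {
        uint32_t bits[3];
        std::memcpy(bits, &v, sizeof bits);   // hash the raw bit patterns of x, y, z
        return (std::size_t)bits[0] * 73856093u
             ^ (std::size_t)bits[1] * 19349663u
             ^ (std::size_t)bits[2] * 83492791u;
    }
};

// Returns the index for v, appending it to the vertex list only if it's new.
uint32_t addVertex(const Vertex& v,
                   std::vector<Vertex>& verts,
                   std::unordered_map<Vertex, uint32_t, VertexHash>& seen)
{
    std::unordered_map<Vertex, uint32_t, VertexHash>::const_iterator it = seen.find(v);
    if (it != seen.end())
        return it->second;                    // duplicate: reuse the existing index
    uint32_t index = (uint32_t)verts.size();
    verts.push_back(v);
    seen[v] = index;
    return index;
}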

Spite
Jul 27, 2001

Small chance of that...

passionate dongs posted:

edit: Is there somewhere to look for OpenGL best practices? Is this something the red book would have?

There's really only the most basic of "best practices" out there.
You can look at:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide and the more recent WWDC talks on it.

Also keep in mind that it varies (sometimes quite heavily) by GPU vendor and platform.

FlyingDodo
Jan 22, 2005
Not Extinct
I recently upgraded my computer, and of relevance is the graphics card: I had a Radeon 9700 and now have a Radeon HD 6970. I have noticed that T-junctions no longer seem to be an issue. On the old card, if I rendered anything in OpenGL which had T-junctions there would be very visible and hideous glitches, so I added code to insert additional vertices where required to fix them. With my new graphics card the glitches don't seem to appear at all. Is it still necessary to make sure a polygon mesh has no T-junctions, along with things like vertex welding?

haveblue
Aug 15, 2005



Toilet Rascal
Yes, you're just lucky that it doesn't look as bad in your specific case. T-junctions are always a bad thing.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
Is there some way in shaders to read the pixel that's 'behind' the pixel you're rendering?

Specifically, a pixel shader can, eg. say "this pixel has 50% alpha", which then blends the color appropriately with the color of the pixel behind. Is there some way to say, for example, "I want to blend with the inverse of the pixel behind"?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Blend shaders don't exist yet; the "best" way to do custom blends is to use MRT to render to a texture and the screen at the same time, then use that texture in the pixel shader of later passes.

If you want to invert the color of the destination then you might be able to use ONE_MINUS_DST_COLOR.
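
For example, the fixed-function blend state for "multiply by the inverse of what's already there" could be set up like this (a sketch; whether it's exactly the effect wanted is another question):
code:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);  // result = src * (1 - dst)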

CCrew
Nov 5, 2007

So, I'm working on a simple program in OpenGL, with a pretty simple shader. My vertex shader has the following code:
code:
#version 330

layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;

smooth out vec4 theColor;

void main()
{
	gl_Position = position;
	theColor = color;
}
But, when I try to compile, I get the error:
code:
Implicit Version number 110 not supported by GL3 forward compatible context
I've been looking around for a solution, but haven't had any luck. Anyone know what's going on? Any help would be appreciated!

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
#version 130, not 330

CCrew
Nov 5, 2007

Thanks, but I just figured it out. Turns out it was much dumber...the text files I was editing in vs2010 were separate from what the program was physically referencing. One of those, "Oh, I'm dumb" moments. Thanks for the quick response though!

ShinAli
May 2, 2003

The Kid better watch his step.
Not really a programming question, but I'd like to know what you know-more-than-I-dos think.

I remember a while back some ATi guy saying something like how much better it would be if developers were able to program directly to the GPU rather than depending on the graphics company's implementation of a library, with a chorus of major engine developers saying that would be pretty awesome, then the ATi guy immediately backpedaling and going "HEY WHAT I MEANT TO SAY WAS OH BOY ISN'T DIRECTX JUST GREAT?"

Has there been much consideration of going this route, given that the graphics companies agreed on some common instruction set? Would there be too many drawbacks? Given that everyone agrees on an instruction set, we could still have all those libraries, except their development could be more transparent; more than that, developers wouldn't be so restricted to them.

Is this just a case where it's what everyone wants, it's just that nVidia and AMD/ATi wouldn't go along with it? Or is this thinking flawed?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
It's flawed because developers don't want it either. See CTM on the ATI side for a nice example: it's really specific to the actual card you're running on. CUDA was more lenient, but developers are annoyed enough at having to vary renderer behavior by hardware when they DO use the same API.

Making card-specific behavior works great when your audience uses one card. That isn't the PC market. Making card-specific behavior there means you're sinking development resources into a fraction of your audience.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

Making card-specific behavior works great when your audience uses one card. That isn't the PC market. Making card-specific behavior there means you're sinking development resources into a fraction of your audience.

Or, if you're Epic and are willing to sink a lot of effort into writing low-level engine code so that you can sell the engine at a markup to a gaggle of independent developers.

But no, in general it's not a good idea, because GPU hardware changes way faster than CPU hardware does right now, and the major vendors are very different under the hood in a lot of important ways, so it would be hard to come together on a fully shared ISA.

haveblue
Aug 15, 2005



Toilet Rascal

OneEightHundred posted:

Making card-specific behavior there means you're sinking development resources into a fraction of your audience.

Or it means you're targeting a console, where doing this is much more practical and encouraged and one of the big reasons a console is viable for much longer than a PC with the equivalent specs on paper.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

Or, if you're Epic and are willing to sink a lot of effort into writing low-level engine code so that you can sell the engine at a markup to a gaggle of independent developers.
Epic would rather spend their time writing code with tangible benefits to as many users as possible too. The fact that it affects more users does not change the fact that, relative to other features they could be implementing, it isn't a very productive use of time.

haveblue posted:

Or it means you're targeting a console, where doing this is much more practical and encouraged and one of the big reasons a console is viable for much longer than a PC with the equivalent specs on paper.
That's what I said though.

There are other reasons too; a large part of it is that developers aren't terrifically interested in scalability at the moment.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

Epic would rather spend their time writing code with tangible benefits to as many users as possible too. The fact that it affects more users does not change the fact that, relative to other features they could be implementing, it isn't a very productive use of time.

My point was that the people you hear advocating it are all developers who have middleware to sell -- it would be a HUGE investment of time and effort to implement a flexible, multi-platform low-level engine, but it could also provide them with performance that would give them a competitive advantage over developers with fewer resources to implement and maintain such a codebase. Remember back when Mark Rein was out stomping for Larrabee?

But that's neither here nor there, because this will never happen. CUDA/OpenCL with some limited access to fixed-function units is probably as close as you'll ever get.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

Remember back when Mark Rein was out stomping for Larrabee?
Well, that was probably a relay from Tim Sweeney, who has been attempting to sound the death knell of polygon scanline rasterizers for a while. I don't think they're being disingenuous, I just think that Tim Sweeney does not have a very good grasp of the pace and direction that hardware will head beyond current announcements, evidenced by Larrabee's failure and his early "we will have 24 core processors by the time GOW2 is released!" predictions.

quote:

CUDA/OpenCL with some limited access to fixed-function units is probably as close as you'll ever get.
I don't think this is true; DICE has already said that DirectCompute is actually faster for doing lighting with deferred shading due to culling advantages, and that's with current-generation technology.

All of this is very different from saying that anyone wants a direct hardware tap though. There's a reason that DirectCompute and OpenCL exist.

OneEightHundred fucked around with this message at 17:09 on Oct 27, 2011
