stramit
Dec 9, 2004
Ask me about making games instead of gains.
I found some awesome OpenCL tutorials here:
http://macresearch.org/opencl

Subscribe in iTunes and they have video presentations with slides and everything... reminds me of being back at uni. Oh how times change!

If you know anything about the GPU I would skip the first one and start on the second. Things don't really start to get interesting till the 3rd, though the second goes over how OpenCL works (the first lectures just explain what GPGPU is, etc.).

I'm going to do some particle simulations or a raytracer in GPGPU when my new mac arrives I think.

Yay :D

Spite
Jul 27, 2001

Small chance of that...
If you have a membership, I highly recommend checking out the WWDC OpenCL videos.

As for the guy that wrote those - he worked with the OpenCL/GL team last year for a WWDC presentation. That's the Molecule demo if any of you went to the 09 conference. All that info is the result of that work.

Zerf
Dec 17, 2004

I miss you, sandman

Contero posted:

Can I get some recommendations on papers or tutorials for rendering fire effects? Any kind of fire.

Do you want it to be useful or just to play around? If I had some more time to just experiment I'd definitely look into this: http://users.skynet.be/fquake/

FlyingDodo
Jan 22, 2005
Not Extinct
Am I understanding OpenGL's texture coordinates and byte ordering correctly?



I hope I have it right, or is it different? (e.g. byte 0 is actually bottom left, or texture coordinate (0,0) is top left, etc.)

Spite
Jul 27, 2001

Small chance of that...
I hate to be the guy that's all like "READ THE SPEC" but it's pretty well defined:

http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
From Page 186:
The image itself (referred to by data) is a sequence of groups of values. The first group is the lower left back corner of the texture image. Subsequent groups fill out rows of width width from left to right; height rows are stacked from bottom to top forming a single two-dimensional image slice

So you've got the data backwards. It's talking about glTexImage3D here because the spec is a mess, but a 2d texture is basically a 3d texture with no depth.
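
To make that concrete, a rough sketch of the usual fix when a loader hands you rows top-first (names like pixels and rowBytes are just illustrative, not from any particular loader):
code:
/* GL expects the first row in memory to be the BOTTOM of the image,
   so flip the rows of a top-down image in place before uploading. */
for (int y = 0; y < height / 2; ++y) {
    unsigned char *a = pixels + y * rowBytes;
    unsigned char *b = pixels + (height - 1 - y) * rowBytes;
    for (int x = 0; x < rowBytes; ++x) {
        unsigned char t = a[x]; a[x] = b[x]; b[x] = t;
    }
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);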

FlyingDodo
Jan 22, 2005
Not Extinct
Thank you, now I know if my textures are loading correctly. I did actually google to try and find out, but all I could find related to RGB vs BGR or just told me that coordinate (0,0) is the bottom left; nothing specific about how the bytes are arranged.

PalmTreeFun
Apr 25, 2010

*toot*
Alright, so I'd like to start programming a game in OpenGL. I'm very familiar with C++, and I know how to program a game, but I'm not sure what windowing system would be optimal for this purpose. Should I use GLUT, glfw, or something else?

Also, as far as image loading/writing goes, do I need to check for system endian-ness to manipulate image data? If I make a game cross-platform, I don't want it to start making GBS threads itself, loading and modifying images in the wrong order because some goofy OS reads and writes data backwards.

haveblue
Aug 15, 2005



Toilet Rascal

PalmTreeFun posted:

Also, as far as image loading/writing goes, do I need to check for system endian-ness to manipulate image data? If I make a game cross-platform, I don't want it to start making GBS threads itself, loading and modifying images in the wrong order because some goofy OS reads and writes data backwards.

This isn't an issue; all the major image formats and their loaders/exporters will take care of it.

Mata
Dec 23, 2003

PalmTreeFun posted:

Alright, so I'd like to start programming a game in OpenGL. I'm very familiar with C++, and I know how to program a game, but I'm not sure what windowing system would be optimal for this purpose. Should I use GLUT, glfw, or something else?

Also, as far as image loading/writing goes, do I need to check for system endian-ness to manipulate image data? If I make a game cross-platform, I don't want it to start making GBS threads itself, loading and modifying images in the wrong order because some goofy OS reads and writes data backwards.

I recommend GLFW, it's really smooth and has some nice things built in, like full-screen anti-aliasing and TGA loading, both of which are a pain in the rear end to do yourself.
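
For anyone who hasn't used it, getting a window up in GLFW 2.x (the current version as of this writing) looks roughly like this; treat the exact hint names as from-memory and check the docs for your version:
code:
#include <GL/glfw.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);   /* request multisampling */
    if (!glfwOpenWindow(800, 600, 8, 8, 8, 8, 24, 8, GLFW_WINDOW)) {
        glfwTerminate();
        return 1;
    }
    glfwSetWindowTitle("GLFW window");

    while (glfwGetWindowParam(GLFW_OPENED) && !glfwGetKey(GLFW_KEY_ESC)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* draw here */
        glfwSwapBuffers();   /* also polls input by default in 2.x */
    }

    glfwTerminate();
    return 0;
}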

Spite
Jul 27, 2001

Small chance of that...
I'm not familiar with glfw, but I'd stay away from GLUT. SDL also works.
You're probably better off handling context creation/destruction yourself in general.

Most image formats dictate endianness, so you shouldn't have to worry about that. OSX has the ImageIO library that will handle most formats. Model loading and animation is the hard part.

And the usual caveats:
Use vertex buffer objects and frame buffer objects (rough VBO sketch below)
Batch your state together and change state as little as possible
That said, don't make things overcomplicated
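
Re: the first caveat, the core of a VBO is just this (identifiers like verts are illustrative):
code:
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);  /* upload once */

/* per frame: point attribute 0 at the buffer and draw straight from it */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glDrawArrays(GL_TRIANGLES, 0, vertCount);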

Paradoxish
Dec 19, 2003

Will you stop going crazy in there?
Hopefully this is a quick DX10 question: I want to be able to retrieve and restore device state information, including whatever index and vertex buffers the device is currently using. Doing that itself is no problem, but I'm concerned about what happens with IAGetVertexBuffers or IAGetIndexBuffers if the device isn't currently hooked up to any buffers. Do those methods return NULL pointers? I don't really picture this happening with how this bit of code is intended to be used, but for error handling purposes I'd like to be able to skip restoring device settings if there's nothing to restore.

Edit- I should think before I post. Figured it out on my own by just stepping through the code in the debugger while no vertex and index buffers were hooked up. They do indeed return NULL pointers.
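
For anyone hitting the same thing, a rough save/restore sketch (the IAGet* calls AddRef whatever non-NULL interfaces they return, so Release them once you're done restoring; names here are illustrative):
code:
ID3D10Buffer *vb = NULL, *ib = NULL;
UINT stride = 0, offset = 0, ibOffset = 0;
DXGI_FORMAT ibFormat = DXGI_FORMAT_UNKNOWN;

device->IAGetVertexBuffers(0, 1, &vb, &stride, &offset);
device->IAGetIndexBuffer(&ib, &ibFormat, &ibOffset);

/* ... bind our own buffers and draw ... */

device->IASetVertexBuffers(0, 1, &vb, &stride, &offset);   /* vb may be NULL */
device->IASetIndexBuffer(ib, ibFormat, ibOffset);           /* ib may be NULL */
if (vb) vb->Release();
if (ib) ib->Release();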

Paradoxish fucked around with this message at 18:54 on Jul 7, 2010

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe
I seem to be retarded because I'm having trouble converting between horizontal and vertical fov.

is:
code:
2 * atan(tan(hfov / 2) * aspect)
wrong?

I want vertical from horizontal.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Looks correct. Code I'm using is:

code:
_halfVFov = terAngleRadians(atan2(1.0f, aspectRatio / _halfHFov.Tan()));
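
For anyone comparing the two snippets: with the usual aspect = width / height convention the relation is tan(vfov/2) = tan(hfov/2) / aspect, so vertical-from-horizontal divides by the aspect (as in the code above); multiplying by it, as in the earlier post, only gives the vertical FOV if your aspect is height / width. Written out:
code:
float VFovFromHFov(float hfov, float aspect) { return 2.0f * atanf(tanf(hfov * 0.5f) / aspect); }
float HFovFromVFov(float vfov, float aspect) { return 2.0f * atanf(tanf(vfov * 0.5f) * aspect); }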

Mustach
Mar 2, 2003

In this long line, there's been some real strange genes. You've got 'em all, with some extras thrown in.
Is there a good, cross-platform, OpenGL, font-loading and text-rendering library? FTGL is really simple to use but it's implemented in immediate mode.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Most games do font rendering by having FreeType rasterize the font at a desired point size and then render the glyphs like any other 2D imagery.

i.e. use FT_Glyph_To_Bitmap

Mustach
Mar 2, 2003

In this long line, there's been some real strange genes. You've got 'em all, with some extras thrown in.
I had a feeling it would be that way. I was hoping to avoid learning FreeType, but now that I have a starting point it doesn't seem so bad. Thanks!

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe
nevermind, figured it out.

slovach fucked around with this message at 17:31 on Jul 12, 2010

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mustach posted:

I was hoping to avoid learning FreeType
It's easy as gently caress.

FT_Init_FreeType to init.
FT_New_Face (or FT_New_Memory_Face) to load a font file into a typeface.
FT_Load_Glyph to load the glyph from a typeface.
FT_Get_Glyph to retrieve the glyph object for it.
FT_Glyph_To_Bitmap to render it to a bitmap.

FT_Bitmap_Done to release the bitmap.
FT_Done_Glyph to release the glyph.
FT_Done_Face to release the face.
FT_Done_FreeType to release everything.

That's it.
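
Stringing those together (error checks dropped, and note the size-setting call that isn't in the list above; "font.ttf" and the 16px size are just placeholders):
code:
#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_GLYPH_H

FT_Library lib;
FT_Init_FreeType(&lib);

FT_Face face;
FT_New_Face(lib, "font.ttf", 0, &face);
FT_Set_Pixel_Sizes(face, 0, 16);                       /* rasterize at 16px */

FT_Load_Glyph(face, FT_Get_Char_Index(face, 'A'), FT_LOAD_DEFAULT);

FT_Glyph glyph;
FT_Get_Glyph(face->glyph, &glyph);
FT_Glyph_To_Bitmap(&glyph, FT_RENDER_MODE_NORMAL, NULL, 1);
FT_Bitmap *bmp = &((FT_BitmapGlyph)glyph)->bitmap;     /* 8-bit coverage map */

/* copy bmp->buffer (bmp->width x bmp->rows, bmp->pitch bytes per row) into a texture here */

FT_Done_Glyph(glyph);
FT_Done_Face(face);
FT_Done_FreeType(lib);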

e: Oh yeah, snapping glyph contours to pixel edges and using integer glyph dimensions are part of the font rendering process; it was designed that way from the ground up, and it's why fonts look crisp at low resolution.

In other words, process the font separately for each size you intend to render at, especially for small sizes. Do not process the font once at a large size and scale it down, it will look like blurry poo poo if you do that. Unless you're going to attach it to objects in 3D space, in which case you SHOULD do that.

OneEightHundred fucked around with this message at 20:25 on Jul 12, 2010

Mustach
Mar 2, 2003

In this long line, there's been some real strange genes. You've got 'em all, with some extras thrown in.
Dude, you went the extra mile on that. You're awesome, thanks!

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Good news for OpenGL types who are fed up with a lack of books on modern techniques: It looks like the Fifth Edition of the Superbible is coming out soon, and the code samples can be checked out from the author's SVN repo. I grabbed the contents and so far it looks pretty promising; looks like they've discarded immediate mode altogether (thank Christ somebody told the author that it's no longer 1998), and shaders get introduced a lot earlier (IIRC, in the previous edition they were almost afterthoughts) :woop:

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum

Dijkstracula posted:

Good news for OpenGL types who are fed up with a lack of books on modern techniques: It looks like the Fifth Edition of the Superbible is coming out soon, and the code samples can be checked out from the author's SVN repo. I grabbed the contents and so far it looks pretty promising; looks like they've discarded immediate mode altogether (thank Christ somebody told the author that it's no longer 1998), and shaders get introduced a lot earlier (IIRC, in the previous edition they were almost afterthoughts) :woop:

Is there any hint of a release date? I've been wanting to get into some OpenGL stuff, but haven't had the time, and I think this might push me into learning more about it.

Spite
Jul 27, 2001

Small chance of that...
The thing about the Superbible is that there are 3 authors, with different parts by each. The original (super old) book was written by Richard Wright. Then Benj Lipchak wrote the shader/more recent stuff. The fourth edition (whenever it became the blue book) has another guy, but I don't know him. It's not a bad place to start, but OpenGL is in an odd place right now and a new book would hopefully cover 4.0 - except that no one has actually written a 4.0 app, so who knows!

For learning, you're better off with tutorials or asking here.

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Spite posted:

The thing about the Superbible is that there are 3 authors, with different parts by each. The original (super old) book was written by Richard Wright. Then Benj Lipchak wrote the shader/more recent stuff. The fourth edition (whenever it became the blue book) has another guy, but I don't know him. It's not a bad place to start, but OpenGL is in an odd place right now and a new book would hopefully cover 4.0 - except that no one has actually written a 4.0 app, so who knows!

For learning, you're better off with tutorials or asking here.
Yeah, when I tried migrating to OpenGL I tried the fourth edition, and found it profoundly disappointing, coming from the "shaders are supposed to be ubiquitous and aren't actually a difficult thing to start people off on" mentality of DirectX. But, as I say, going from the code samples, there might actually be some meat on the bones of the new edition (which, to answer Sweeper's question, is apparently currently available on Amazon, but I haven't seen it in any of the e-book repositories that my University has access to).

I can't tell you the number of hours I've wasted trying to find OpenGL material on par with the ShaderX series or MSDN documentation. Hopefully the fifth edition will do the job. I think Khronos actually needs to deprecate glBegin/glEnd to force NeHe into irrelevance or something. :(

Speaking of tutorials, I compulsively check the Durian blog when I get up each morning and there's a new article out :woop: It's looking pretty drat rad.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Dijkstracula posted:

I think Kronos actually needs to deprecate glBegin/glEnd to force Nehe into irrelevance or something. :(
Those have already been purged from the core API as of 3.1

haveblue
Aug 15, 2005



Toilet Rascal

OneEightHundred posted:

Those have already been purged from the core API as of 3.1

And they never existed at all in OpenGL ES.

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum

Dijkstracula posted:

(which, to answer Sweeper's question, is apparently currently available on Amazon, but I haven't seen in in any of the e-book repositories that my University has access to)

It says it's out July 30th on Amazon now, I didn't see that the last time I checked!

http://www.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617/ref=sr_1_1?ie=UTF8&s=books&qid=1279304444&sr=8-1

Spite
Jul 27, 2001

Small chance of that...
The problem is that there's a metric assload of code out there that still uses Begin/End and/or display lists. And there's still a bunch of people that have to maintain that code. Unfortunately most games use D3D these days, so there isn't as much pressure for good OGL tutorials. The amount of infighting in the ARB doesn't help the state of the API either.

I wish there were a few good really impressive OGL3.2+ games out there, but there aren't. ES is the best bet since it still has the majority, unless they screw up that spec.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Spite posted:

The problem is that there's a metric assload of code out there that still uses Begin/End and/or display lists. And there's still a bunch of people that have to maintain that code. Unfortunately most games use D3D these days, so there isn't as much pressure for good OGL tutorials.
It's more of a feedback effect: as long as begin/end are the easiest way to get poo poo out to a framebuffer, that's what the tutorials will show.

OpenGL MIGHT be on the verge of viability again once they finally strip some of the remaining bullshit out (i.e. sampler state still being part of texture state, mandatory shader linking) in the next version, but they're still playing catch-up and are still missing features for absolutely no reason (i.e. why is it still not possible to save a compiled shader to disk and reload it, even if it's vendor-specific bytecode?)

OneEightHundred fucked around with this message at 00:27 on Jul 17, 2010

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

(i.e. why is it still not possible to save a compiled shader to disk and reload it, even if it's vendor-specific bytecode?)

That's really, really hard and annoying to do. The bytecode would have to be so generic as to essentially be the same as a shader source file or arb program string. If they made vendor-specific bytecodes it would have to be totally hardware agnostic (you don't want to have to ship different bytecodes for r5xx, r6xx, r7xx, r8xx if you are ATI) and also you'd have to support it forever in the future.

It's not going to happen unless OpenGL defines a format - but that's kind of pointless. Storing bytecode doesn't really help you that much - the GL stack and the Driver stack still have to compile it into machine code and do their optimizations. You don't gain much time from hoisting out the parsing and if you are compiling shaders constantly during rendering you are doing something very wrong.

The only reason to do this I see is obfuscation, but that doesn't gain you much since the people you are trying to hide the shaders from can just read the spec and reverse engineer it anyway.

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

But HLSL does exactly this, doesn't it? Or is it some sort of intermediate representation that gets JITted at runtime? I guess I never thought about it, but I always just assumed that, say, all cards that support a certain shader model would expose the same GPU opcodes.

Hammertime
May 21, 2003
stop
Sorry if this has been already covered. I've tried to read most of the thread.

I'm writing a game in my free time, but so far I've been largely self-taught from tutorials. I haven't been completely led astray, since I know most of the tutorials are garbage, but I have no clue at all what "good" practice is, let alone how to make a game that uses the hardware efficiently.

So far I've largely learnt:
- Immediate mode is a joke
- Display lists are archaic
- Vertex Buffer Objects are good
- Fixed function pipeline is archaic
- Everything has to be done through shaders
- OpenGL 3.0 changes things heavily moving forward

My Nvidia GTX280 apparently only has OpenGL 2.1. If I'm trying to target relatively modern hardware (but not bleeding edge), is relying on OpenGL 3.0 even doable?

I've read all of the "An intro to modern OpenGL" tutorials, but it's not progressing at the rate I need.

I'm going through the superbible 5 code ... what's the best way for me to gain a thorough modern OpenGL education?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Spite posted:

That's really, really hard and annoying to do. The bytecode would have to be so generic as to essentially be the same as a shader source file or arb program string.
That isn't what I'm asking for. I'm saying that I should be able to have the driver give me the vendor-specific and probably hardware-specific code compiled from the GLSL so I only have to compile it once and then save it to disk so I never have to compile it again.

Even that's a pretty large concession though: I do think it would probably still be better if they used semi-agnostic bytecode similar to D3D because it gives the driver writers fewer places to screw up. Being able to blue-screen my computer because the ARB decided to trust ATI with writing a compiler sucks rear end.

shodanjr_gr
Nov 20, 2007
So I am working on getting Ogre3D to work inside a CAVE environment. I've written some code for calculating off-axis projection matrices for an arbitrary viewport (defined by the bottom left, top left and bottom right corners) and an arbitrary eye position.

If I define a viewport with its center at (0,0,-1), extending 1 unit to each side (so that the top frustum edge is at (0,1,-1), the bottom one at (0,-1,-1), etc.) and I place my eye at (0,0,0) in this "frustum space", I get this sort of result:


[Screenshot of the result I get, 797x750]


While I am expecting to get this (which is what gets produced if I just create a viewport with an aspect ratio of 1.0):


[Screenshot of the expected result, 800x747]


I've actually tried 2 different ways to calculate the off-axis projection matrices (one on my own, one ripped off from Syzygy, a VR library) and I get the same result. My intuition is that the custom projection matrix ends up having a far larger FOV than the non-custom one...

Any ideas?
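
For comparison, the standard off-axis construction for exactly that setup (screen parallel to the XY plane in tracking space; all names below are illustrative) just scales the screen extents back to the near plane:
code:
/* screen spans [xmin,xmax] x [ymin,ymax] at z = zScreen; eye at (ex,ey,ez) looking down -Z */
float d = ez - zScreen;                 /* eye-to-screen distance, positive */
float l = (xmin - ex) * zNear / d;
float r = (xmax - ex) * zNear / d;
float b = (ymin - ey) * zNear / d;
float t = (ymax - ey) * zNear / d;
glFrustum(l, r, b, t, zNear, zFar);
glTranslatef(-ex, -ey, -ez);            /* then shift the view to the eye position */
Note that with the eye at the origin and a screen at z = -1 spanning ±1, this works out to a 90-degree FOV in both directions, which is wider than a typical default camera, so that alone could account for the difference you're seeing.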

Zerf
Dec 17, 2004

I miss you, sandman

Hammertime posted:

what's the best way for me to gain a thorough modern OpenGL education?

I find that the easiest way is just to skip the specifics (i.e. which API you're using) and read what the IHVs give presentations about, be it DirectX or OpenGL. Working with graphics, you're going to be bound by hardware anyway, so what works well in DirectX is probably going to work well in OpenGL too.

What I would do is just skim through Nvidia/AMD dev websites and look at performance articles/presentations such as http://developer.download.nvidia.com/presentations/2008/GDC/GDC08-D3DDay-Performance.pdf (this is a bit old, I just took something as an example)

You'll find lots of information on http://developer.amd.com/, http://developer.nvidia.com/ and maybe even http://software.intel.com/en-us/visual-computing/. And when in doubt, use a profiler!

Zerf fucked around with this message at 20:29 on Jul 19, 2010

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

That isn't what I'm asking for. I'm saying that I should be able to have the driver give me the vendor-specific and probably hardware-specific code compiled from the GLSL so I only have to compile it once and then save it to disk so I never have to compile it again.

Even that's a pretty large concession though: I do think it would probably still be better if they used semi-agnostic bytecode similar to D3D because it gives the driver writers fewer places to screw up. Being able to blue-screen my computer because the ARB decided to trust ATI with writing a compiler sucks rear end.

No vendor is going to do that though - that's too much information for them to be comfortable letting the user keep around. And then you'd have to ship compiled shaders for every card model - r5xx, r6xx, etc, etc. It's not really worth it.

And the agnostic bytecode D3D ships is still compiled by each driver - every vendor still has to convert it into machine code and run optimizations, or maybe even replace chunks altogether. It may look very different from the bytecode after all this. It only saves the parsing step, which is really not saving much time at all.

haveblue
Aug 15, 2005



Toilet Rascal
For what it's worth, saving shader executables to disk is a trick that's often done on consoles.

pseudorandom name
May 6, 2007

Spite posted:

And the agnostic bytecode D3D ships is still compiled by each driver - every vendor still has to convert it into machine code and run optimizations, or maybe even replace chunks altogether. It may look very different from the bytecode after all this. It only saves the parsing step, which is really not saving much time at all.

It does save us from every vendor writing their own incompatible GLSL compiler, though.

haveblue posted:

For what it's worth, saving shader executables to disk is a trick that's often done on consoles.

There's a grand total of three console GPUs on the market right now.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Spite posted:

No vendor is going to do that though - that's too much information for them to be comfortable letting the user keep around.
Oh bullshit, CTM basically already works off of that exact concept. ATI even has tools that let you look at the compiled output of GLSL shaders.

I'm not saying you'd have to SHIP it with each one anyway, I'm saying that the app would only have to compile them once ever and cache them to disk.

Compile-then-save isn't a revolutionary feature either; Far Cry and a few Battlefield entries, for example, leave shaders uncompiled until the first time they're used.

And yes, the incompatible shader compilers thing is a complete pain in the rear end too, especially with int/float conversions, which ATI will throw errors over while NVIDIA completely ignores.
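
For what it's worth, this is more or less what the glGetProgramBinary/glProgramBinary pair in ARB_get_program_binary (core in GL 4.1) ends up exposing; a rough compile-once-and-cache sketch, assuming that extension is available:
code:
/* first run: link with the retrievable hint set, then grab the driver-specific blob */
glProgramParameteri(prog, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
glLinkProgram(prog);

GLint len = 0;
glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
char *blob = (char *)malloc(len);
GLenum format = 0;
glGetProgramBinary(prog, len, NULL, &format, blob);
/* write 'format' + blob to disk, keyed by a hash of the GLSL source */

/* later runs: try the cached blob, fall back to source if the driver rejects it */
GLint ok = GL_FALSE;
glProgramBinary(prog, format, blob, len);
glGetProgramiv(prog, GL_LINK_STATUS, &ok);
if (!ok) { /* driver or GPU changed - recompile from GLSL */ }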

OneEightHundred fucked around with this message at 23:16 on Jul 19, 2010

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

Oh bullshit, CTM basically already works off of that exact concept. ATI even has tools that let you look at the compiled output of GLSL shaders.

I'm not saying you'd have to SHIP it with each one anyway, I'm saying that the app would only have to compile them once ever and cache them to disk.

Compile-then-save isn't a revolutionary feature either; Far Cry and a few Battlefield entries, for example, leave shaders uncompiled until the first time they're used.

And yes, the incompatible shader compilers thing is a complete pain in the rear end too, especially with int/float conversions, which ATI will throw errors over while NVIDIA completely ignores.

Right, but it's not _really_ what's being run on the card. That's just a representation of it. You can't just compile once, because it doesn't work that way. Every driver on every card will recompile shaders based on state to bake stuff in and do other optimizations. You'd need a huge number of different binaries to cover all possibilities. Turned on sRGB? Your fragment shader will be recompiled. Using texture borders? Recompile again, etc.

It would be nice to have a specified GLSL parser that doesn't suck, definitely. But there's a whole lot more work that goes into compilation than just that. Hoisting the initial parsing step is really saving just a small part of the overall time and work for "compilation."

Luminous
May 19, 2004

Girls
Games
Gains

Spite posted:

Right, but it's not _really_ what's being run on the card. That's just a representation of it. You can't just compile once, because it doesn't work that way. Every driver on every card will recompile shaders based on state to bake stuff in and do other optimizations. You'd need a huge number of different binaries to cover all possibilities. Turned on sRGB? Your fragment shader will be recompiled. Using texture borders? Recompile again, etc.

It would be nice to have a specified GLSL parser that doesn't suck, definitely. But there's a whole lot more work that goes into compilation than just that. Hoisting the initial parsing step is really saving just a small part of the overall time and work for "compilation."

Do you really not understand what OneEightHundred is saying? Or are you just trolling him? You should have just stopped at your very first sentence.

If options change that would cause a need for recompilation, then it would be recompiled. However, if not, then keep using the stored built version. I'm not sure why caching appears to be an alien concept to you. Maybe you're just being all goony semantic about it, or something.
