Spite
Jul 27, 2001

Small chance of that...
I'm not a display list whiz, but your two versions don't really do the same thing. I'm not sure display lists have any meaning outside of glBegin; by which I mean calls like glVertex don't work outside a begin/end pair. I'd use VBOs instead, as they tend to be more efficient on modern hardware (they're the path the driver developers actually care about).


Spite
Jul 27, 2001

Small chance of that...
Right, but how are you creating the lists? glVertex doesn't really mean anything outside of a begin/end pair - so the driver may be getting confused when you compile your display lists, or try to draw them. First thing that comes to mind for me, anyway.

Spite
Jul 27, 2001

Small chance of that...
I think that's just worded badly. glVertex/Color/Normal, etc don't really mean anything by themselves - the display list will be optimized by the implementation, but in order to do that it needs to know what the data means. By not including begin/end, you aren't including that info so it can't really do anything.

Most implementations let you turn off multithreading; try turning it off and see what happens.

Spite
Jul 27, 2001

Small chance of that...
On windows, it's usually in the driver options. I know nvidia has it; not sure about ATI. On Mac, it's set programmatically (though that may be difficult if you're using perl)

Spite
Jul 27, 2001

Small chance of that...
Everything will be decomposed to triangles eventually, so you should start with them.

Constantly generating new display lists won't speed you up unless you re-use them a lot. You're almost certainly better off using a vertex buffer object and updating it.
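A minimal sketch of that approach, assuming GL 1.5+ buffer objects are available. The names (vbo, initBuffer, updateAndDraw) are illustrative, not from the post:

```c
/* Sketch: one reusable VBO updated in place, instead of recompiling
   display lists every frame. Requires a current GL context. */
#include <stddef.h>
#include <GL/gl.h>   /* glGenBuffers etc. may need glext.h on some platforms */

static GLuint vbo;

void initBuffer(const float *verts, size_t vertCount)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* DYNAMIC_DRAW hints that we'll rewrite the contents often */
    glBufferData(GL_ARRAY_BUFFER, vertCount * 3 * sizeof(float),
                 verts, GL_DYNAMIC_DRAW);
}

void updateAndDraw(const float *verts, size_t vertCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* Overwrite the existing storage; no new list compilation */
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    vertCount * 3 * sizeof(float), verts);
    glVertexPointer(3, GL_FLOAT, 0, NULL);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)vertCount);
}
```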

Spite
Jul 27, 2001

Small chance of that...

shodanjr_gr posted:

Tried that, same thing (maybe sliiiiiiiiiiiiightly faster).


I also tried uncommenting all calls that are not related to matrix stuff, or to geometry production, but still the performance is equally crappy... I have also reinstalled the GMA drivers by Intel...


While I understand (and love) FBOs, I just want to show how shadowmapping works in principle and don't want to overcomplicate the demo. Plus I can't be sure if all the lab hardware supports them...

How are you implementing the shadow mapping? What's your state look like? Are you copying the depth texture to the CPU memory and back to the GPU? Are you letting OpenGL generate the texture coordinates for you with regards to the shadow mapping? Pbuffers suck horribly and the intel chips are crap with FBOs (and in general). What happens if you just draw your scene twice, does performance still suck? Have you tried a profiler?

Spite
Jul 27, 2001

Small chance of that...

Scarboy posted:

I have a camera in OpenGL using the gluLookAt function that is working correctly. The camera rotates around a fixed point at the center of the screen. Is there any way i can lower the center point of the camera on the screen/viewport/window?

I don't want to put the center of the camera a few units further on the z-axis because then my object moves when the camera rotates around it. I want the object to always be in the same place, have the camera rotating around it, and to not be at the center of the screen (somewhere in the lower 1/3 of the screen).

Any way to do this?

You can just translate up and down to make it seem like the camera is higher or lower.

I think you're misinterpreting how the math works out. In the end, after your transformations are applied, the camera is at 0,0,0 looking down the -z axis and everything else has been transformed relative to that. Try thinking about it as if the world is moving around the camera, instead of the camera moving through the world.

EDIT: Alternately, you could adjust your viewport - but that might be weird.

Spite fucked around with this message at 02:55 on Feb 25, 2009

Spite
Jul 27, 2001

Small chance of that...

PnP Bios posted:

I imagine most of the changes in 3.0 have to do with GLSL rather than the core API. ex, implementation of geometry shaders.

The only real change left to the core API is to remove the fixed pipeline functionality. That's never going to happen though, since the CAD developers would poo poo bricks.

Well, it's very incremental, but 3.0 _should_ have been like ES2.0 and removed all that crap. Especially since half of OpenGL isn't ever used, should never be used, and your computer's implementation doesn't support it anyway. As for 3.0 usage, there really isn't a reason to use it yet, since almost everything interesting can be done with an extension and no one wants to learn a new API until they have to.

But it will never happen because, as you said, the legacy developers would go crazy. There are a lot (and I mean a lot) of really baaaaad OpenGL apps out there.

Though I have to say: it is a royal pain in the rear end to get something up and running quickly in ES2.0.

Spite
Jul 27, 2001

Small chance of that...

shodanjr_gr posted:

Since we are talking about point sprites, is it possible to get to the point-sprite generated geometry inside a geometry shader?

If you are using geometry shaders (and really, they kind of suck since they aren't very performant), why not extrude a single vertex into a quad yourself?

Spite
Jul 27, 2001

Small chance of that...

Dijkstracula posted:

So, I'm continuing on my search for a proper non-fixed-pipeline introduction to OpenGL 3.x, and so far I'm still inexplicably coming up short, with the exception of the OpenGL ES 2.0 programming guide. Is ES close enough to vanilla OpenGL that I can more or less s/E(GL\w+)/GL\1/ and swap a few header files in and out?

Also, I'm wondering precisely why everyone is saying to stay away from the Nehe tutorials. I've been glancing at them this morning and they actually seem better than the OpenGL Superbible, which I took out of the library a few weeks ago (and which in my opinion has downright terrible code). edit: ah, okay, well, I've just hit a point where it would have been obvious to use glPushMatrix() and glPopMatrix(), but it blows everything away with the identity matrix and recomputes everything...

Or, is this the Forces of Destiny trying to tell me that OpenGL is all but dead on non-embedded devices and I should boot into Windows and do DX10 stuff instead?

ES2 is nothing like vanilla OpenGL, but will be close to OpenGL 3.2. OpenGL 3.0 doesn't exist. IT DOES NOT EXIST I TELL YOU. ahem. In ES2 you have to do almost everything by hand, to the point where you have to pass your matrices to your shaders yourself. There is no fixed function at all. This is a pain when you are starting out, since you have to write a ton of code just to get a quad on screen, but it is waaaay better in the long run since it removes stuff that really shouldn't still be in the API.

Everyone says to stay away from Nehe because they are out of date and naive about how they do things. For example, in a real application you should never, ever, ever use Display Lists or call glBegin. All geometry should be put into a VBO and you should draw with that. It's ok for a very simple tutorial, but it really encourages bad habits, I feel.

As for my previous comment on Geometry shaders, I mean that every implementation that's currently available sucks - especially under OpenGL. They're a great idea that has been very underwhelming to me thus far.

Spite
Jul 27, 2001

Small chance of that...

Jo posted:

How is the OpenGL experience in Java? Is it fairly similar to C++, or does the extra layer of indirection with memory management gently caress everything over?

I think it's absolutely horrible, even if you find a binding that doesn't create some over-engineered OO paradigm.

Spite
Jul 27, 2001

Small chance of that...

Strumpy posted:

You are wrong. LWJGL is a great binding. It uses native buffers to imitate float pointers and the like, and the API is for the most part a direct binding. There are some Display classes and such to get it started that are not GL specific, but all the OpenGL code will be.

http://lwjgl.org/

Yes, but you are still using Java, which means you are adding a crapload of overhead to a type of programming that should be as efficient as possible. Crossing the boundary from the JVM into the GL library is a pretty expensive thing to do for every GL call.

Spite
Jul 27, 2001

Small chance of that...

Unparagoned posted:

I'm using opengl es for the iphone. I have a very simple model that exists in 3d space, the problem is that the visible parts depend on the order they are drawn rather than their position in 3d space. So say the person has a shield in their left hand, then from the left view, you see the shield and can't see the body since it's being blocked. Now from the right side, the body should still be seen, but it's not.

I figured out that this is since the shield is being added to the array last, if I make it first I get the opposite problem.


How do I make it act like it should?

Edit: Found a tutorial on Depth Testing. Works to some degree. I just need it to work with alpha properly. The texture is a square, and most of that square is see-through, but when this square goes over another part it sometimes makes things see-through as well..

Looks like the order things are drawn in is still important... It looks like if something is behind something else it gets completely ignored, even though part of it is visible, since the closer object is mostly transparent. Order is important since, if the front object is drawn last, everything looks great, but if the front object is drawn first, things get wonky.

Edit 2: http://books.google.co.uk/books?id=...result&resnum=4 that link seems to explain things well. Looks like I'm going to have to do some work.

Edit: My solution. Drawing from back to front wasn't a real option. So I separated all the transparent parts and drew them separately at the end with glDepthMask(GL_FALSE);

You really need to sort your transparent objects from back to front. Even if you draw them last, you need to draw them in order, otherwise they won't be drawn correctly, since the blending depends on what is already in the destination buffer. Check out the OpenGL Red and Blue books for details.

Also keep in mind that blending is slow, especially on the iphone.
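The back-to-front sort itself is plain CPU work. A minimal sketch, with an illustrative object layout (the Object struct and camera globals are mine, not from the post):

```c
/* Sketch: sort transparent objects by squared distance from the camera,
   farthest first, so blending composites over what's already drawn. */
#include <stdlib.h>

typedef struct { float x, y, z; int id; } Object;

static float camX, camY, camZ;   /* illustrative camera position */

static float distSq(const Object *o)
{
    float dx = o->x - camX, dy = o->y - camY, dz = o->z - camZ;
    return dx*dx + dy*dy + dz*dz;
}

/* Farthest object sorts first */
static int backToFront(const void *a, const void *b)
{
    float da = distSq((const Object *)a), db = distSq((const Object *)b);
    return (da < db) - (da > db);
}

void sortTransparent(Object *objs, size_t n)
{
    qsort(objs, n, sizeof(Object), backToFront);
}
```

After sorting, draw the list in order with depth writes off (glDepthMask(GL_FALSE)) as described above.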

Spite
Jul 27, 2001

Small chance of that...

krysmopompas posted:

2 pass method

That's a good technique, but you may not have the fill rate to burn on the phone.

You can do a very crude sort and get a decent image (especially if you have lots of inter-frame coherency, as was mentioned), and it doesn't sound like you have too many objects in the scene - so that should be cheap.

Spite
Jul 27, 2001

Small chance of that...

Luminous posted:

Matrix stuff

I'm not quite sure I understand what you are asking for, but if you can get a matrix you should be able to extract the values from it. The first column is the x axis, the second the y, the third the z; the last is the position in that space. Is that what you are doing?

Check out http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/lookat.html

Spite
Jul 27, 2001

Small chance of that...

The1ManMoshPit posted:

Does anybody know of an alternative to glReadPixels on the iPhone (OpenGL ES 1.5)? I'm profiling a section of my code that needs to read some data that I've rendered into an FBO and a quarter of my time is spent just copying data out with glReadPixels. This seems especially ridiculous since the iPhone's video memory is actually shared main memory iirc, so it seems like I should just be able to get a pointer to it somehow which would obviously speed my code up immensely.

As a rule, you should never, ever use it. Ever. Really. What are you doing that requires the readback? Is there any other way you can do it?

Spite
Jul 27, 2001

Small chance of that...

Femtosecond posted:

I have a question that is sort of more a vector math question. I haven't had to deal with vector math for a few years and my old vector math text is sitting in a box at my parent's house so I'm not sure what to do.

Essentially I have a point in 3D space with an orientation. I want to draw a rectangle around that point so that when I change the direction it is oriented in the rectangle around it changes with it.

To create the rectangle I need a minimum extent vector and a maximum extent vector. I feel like the problem is one of offsetting the local xyz components of the target vector by some amount to calculate where this min/max extent vector would be and then finding the world position of this offset vector. I'm not sure how to do this last translation.

I was looking around the library I'm using but I couldn't figure out how to do what I wanted. Maybe I'm not on the right track after all.

Could anyone push me in the right direction?

If you assume the rectangle is at 0,0,0, you can generate the 4 vertices for your rectangle by adding/subtracting. You can then use the orientation vector you have as part of the basis for a rotation matrix. (ie, if the rectangle is 'facing' down +Z, then you use 1,0,0 as the X and 0,1,0 as the Y, while the orientation vector is Z) Is that what you mean?

Spite
Jul 27, 2001

Small chance of that...

haveblue posted:

-Transform the vertex normal by the normal matrix (which is the upper left 3x3 submatrix of the modelview matrix, neglecting nonuniform scaling), normalize the result, to get the eyespace normal.

It's the inverse transpose of the upper 3x3 of the modelview. Of course, if that matrix is just a rotation (or a rotation with uniform scale), the inverse transpose is the same matrix up to a scalar factor, so you can use the modelview directly and renormalize afterward.

Your problem is probably applying the translation to the light vector, instead of just the rotation/scale.
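For reference, a small sketch of the inverse transpose of a 3x3 via the cofactor (adjugate) method; column-major layout as GL uses, function name illustrative:

```c
/* Sketch: normal matrix = inverse transpose of the upper 3x3 of the
   modelview. Column-major storage: m[col*3 + row]. */
#include <math.h>

/* The cofactor matrix divided by the determinant IS the inverse
   transpose, so no extra transpose step is needed. Returns 0 if
   the matrix is singular. */
int normalMatrix(const float m[9], float out[9])
{
    float det =
        m[0]*(m[4]*m[8] - m[5]*m[7]) -
        m[3]*(m[1]*m[8] - m[2]*m[7]) +
        m[6]*(m[1]*m[5] - m[2]*m[4]);
    if (fabsf(det) < 1e-8f) return 0;
    float inv = 1.0f / det;
    out[0] = (m[4]*m[8] - m[5]*m[7]) * inv;
    out[1] = (m[5]*m[6] - m[3]*m[8]) * inv;
    out[2] = (m[3]*m[7] - m[4]*m[6]) * inv;
    out[3] = (m[2]*m[7] - m[1]*m[8]) * inv;
    out[4] = (m[0]*m[8] - m[2]*m[6]) * inv;
    out[5] = (m[1]*m[6] - m[0]*m[7]) * inv;
    out[6] = (m[1]*m[5] - m[2]*m[4]) * inv;
    out[7] = (m[2]*m[3] - m[0]*m[5]) * inv;
    out[8] = (m[0]*m[4] - m[1]*m[3]) * inv;
    return 1;
}
```

Note that for a uniform scale of s, the result is the original matrix times 1/s², which is why normalizing afterward hides the difference.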

Spite
Jul 27, 2001

Small chance of that...
Are your T,B,N vectors the handedness you are expecting? Maybe your bitangent is pointing the opposite direction or something.

Spite
Jul 27, 2001

Small chance of that...

Bonus posted:

So I made a simple 3ds loader (mostly by following this tutorial) for my cool car racing game. I load a 3ds model fine and then I load a .bmp texture file that I turn on before drawing the vertices of the model. But most 3ds (or obj) models that I've found don't come with .bmp textures. Are the textures encoded into the 3ds file itself? What's the usual (and simplest) way for loading a model with a nice texture in OpenGL? It doesn't even have to be in 3ds format, obj is fine too.

There really isn't one. You'll need to roll your own, or find a library. As for the textures, I'd assume they'd be part of a material list of some sort, but I'm not familiar with that file format (your link doesn't really go into it). It may only contain references to actual texture files.
As for loading textures that aren't raw BMP, it's very dependent on your OS. OSX for example has ImageIO that can get the raw bits out.

Spite
Jul 27, 2001

Small chance of that...
This has a crapload of info on the PSX:
http://gshi.org/eh/documents/psx_documentation_project.pdf

The thing doesn't do perspective correct texturing, so you don't have to worry about that.

How familiar are you with rasterization in general? If not at all, start with Bresenham's line algorithm. Abrash has a ton of interesting stuff if you can find it. Foley and van Dam has a chapter on Raster algorithms too, but that book is way old.

Spite
Jul 27, 2001

Small chance of that...

heeen posted:

Can you cite anything for those claims? I'd love to read about it more in depth.
I thought changing the source of the vertex data wouldn't require a pipeline flush since vertices must be passed by value anyways, whereas shaders and uniform changes require the pipeline to be empty before you can change anything.

It depends on the hardware and driver. Typically, changing the active shader is the most expensive (well, unless you are uploading a large constant buffer or something). The costs are way more apparent on the CPU side than the hardware side in most cases though (because of the validation, etc the runtime has to do. OpenGL is worse than DX in this regard because of all its fixed-function legacy stuff).

Many games these days are still CPU bound (especially on the consoles) because stuff isn't batched well or merged well.

Spite
Jul 27, 2001

Small chance of that...

Bonus posted:

I optimized my terrain drawing by using display lists (since it doesn't change anyway). Now it runs fine. Is this acceptable or should I still look into vertex arrays and VBOs?

Also, I've started getting some strange segfaults in my little OpenGL based game. Since I have no idea where it could originate from, I thought I'd run valgrind on it. But when I run valgrind on it, I just get a million errors, a lot of them from seemingly innocent function calls like glEnable(GL_COLOR_MATERIALS) and such. Has anyone here ever used valgrind on OpenGL applications?

Which OS? If OSX, use libgmalloc and gdb to find your error. You're probably passing a pointer to something that's too big or too small. I see a ton of errors in apps that give a bad pointer to stuff like glVertexPointer.

And for the love of God, don't use display lists EVER. As said, use VBOs with STATIC_DRAW for your terrain. Chunk it up so you can cull out parts - the fastest triangle is the one you don't have to draw. You can still use one VBO and DrawRangeElements per piece.

Note that VBOs don't necessarily HAVE to be in VRAM - the driver makes that decision and will page stuff on and off based on load and pressure. STATIC_DRAW hinted buffers will very likely stay resident on the card though. Geometry tends to use much less space than texture data, as well.

EDIT: and now I realize most of this information is redundant with what's already been posted.

Spite fucked around with this message at 09:23 on May 7, 2010

Spite
Jul 27, 2001

Small chance of that...

heeen posted:

Display List do have great performance, especially on nvidia hardware. The compiler does a very good job at optimizing them. But as soon as you're dealing with shaders things will start to get ugly because there are problems with storing uniforms etc.

While you're at it, stick to the generic glVertexAttrib functions instead of the to-be-deprecated glVertexPointer/glNormalPointer/... functions.
You will probably need to write simple shaders for the generic attrib functions, though.

Display lists: that really depends on the platform, and the driver just makes them into VBOs anyway. The original idea of display lists is basically what DX11's deferred context paradigm is trying to get at, and even that has its problems. Everyone in the OGL world is trying to kill display lists, so it's really a bad idea to use them. (the problem being that you can't REALLY optimize them since they tell you nothing about the state at the time the list is created/used. Most CPU overhead is spent validating state and in associated costs - so long as you aren't sending lots of data to the GPU and converting between formats.)

And if you still have VertexPointer, etc, I'd still use them. Optimizations can be made in the driver (ie, in clip space) if the driver knows which attribute is the position and what the modelview and projection matrices are. This won't be true forever, certainly, but you only have a limited number of vertex attribs, and they are definitely reserved if you aren't running OGL 3.0.

Spite
Jul 27, 2001

Small chance of that...
Do not use Display Lists. They are deprecated and disgusting. Most drivers will convert them to VBO/VAO under the hood anyway.
They DO NOT help performance in the way most people think - because of their design the driver can't cache state and validation work, which is what takes all the time anyway.

Use VBO and put everything you can into VRAM. Keep in mind that stuff may be paged on and off the card as the driver needs. Use as few draw calls as necessary - if your hardware supports instancing, use that.

As for UBOs, that spec is a mess. It's probably not that much faster than making a bunch of uniform arrays and updating those - although you can't update pieces of it that way. You can also try gpu_program4 and just update the constant arrays.

Spite
Jul 27, 2001

Small chance of that...

haveblue posted:

You can't read directly from the depth buffer, you have to bind the previously rendered depth buffer as a texture and render the shader output into a different target.

Yeah - do a Z-prepass with color writes turned off. You might also be able to do something by mucking with the depth test and blending, but using a shader will be more straightforward.

Spite
Jul 27, 2001

Small chance of that...
Don't use copytex. Attach a depth texture to an FBO, render a z-prepass into it.

Then bind that texture, and draw into a different FBO with the shader that reads the depth value. You can also turn off depth writes, turn on color writes and use the same FBO.

Also, don't use GL_LUMINANCE - use ARB_depth_texture.
GL_DEPTH_COMPONENT24 and GL_DEPTH_COMPONENT are the <internalformat> and <format>, respectively.
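A sketch of the depth-texture FBO setup described above, assuming GL 3.0 / ARB_framebuffer_object entry points; 'w' and 'h' are illustrative:

```c
/* Sketch: depth texture attached to an FBO for a z-prepass.
   Requires a current GL context with framebuffer object support. */
#include <GL/gl.h>   /* FBO entry points may need glext.h / a loader */

GLuint makeDepthFBO(int w, int h, GLuint *depthTex)
{
    GLuint fbo;
    glGenTextures(1, depthTex);
    glBindTexture(GL_TEXTURE_2D, *depthTex);
    /* internalformat GL_DEPTH_COMPONENT24, format GL_DEPTH_COMPONENT */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, *depthTex, 0);
    /* Depth-only pass: no color buffer to read or write */
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);
    return fbo;
}
```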

Spite
Jul 27, 2001

Small chance of that...
No, you can't draw to both the backbuffer and an FBO, nor should you want to. You can render to multiple color attachments via FragData.

It's much better to get into the habit of rendering to an FBO and then blitting that to the screen. The iPhone, for example, requires you to render into a renderbuffer and then give that to the windowing system to present.

You can just draw a fullscreen quad with an Identity projection matrix - that also allows you to do most processing effects easily.

Spite
Jul 27, 2001

Small chance of that...

haveblue posted:

To be pedantic, I think that's a property of OpenGL ES, not the iPhone specifically.

Also, nobody learns to do this because one of the Xcode project templates contains all the GL setup and frame submission code :v:

Well, if you mean that there's no backbuffer and all rendering must be done into an FBO, then sure. However, it would be nice if you could present a texture instead of a renderbuffer, etc.


ultra-inquisitor:
Pass your modified 'pos' as a varying and set the output red channel to w and see what it's being set to. I agree with OneEightHundred though, it sounds like the shader isn't bound.

Spite
Jul 27, 2001

Small chance of that...
I'm confused as to what you're asking, but say you combine the rotation and translation matrices of the first bone into M1.

M1 = T1*R1.

Then the end of your tentacle is at V1 = M1 * V0.

You can rotate the next bone around its center via M2 = M1 * R2.
Or you can rotate the transformed point via M2 = R2 * M1.

Which do you want?

Or you can just make the matrix yourself
Think of the first 3 columns as the axes of a coordinate space. (the first is the x-axis, the second the y-axis, etc)
That lets you define which direction the bone is facing. The last column is position.
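One quick way to see the difference between the two multiplication orders is a column-major multiply you can test directly (the layout and helper names are illustrative):

```c
/* Sketch: column-major 4x4 multiply, m[col*4 + row] as OpenGL expects.
   M1*R2 rotates about the bone's local center; R2*M1 rotates the
   already-transformed result about the origin. */
#include <string.h>

void mat4Mul(const float a[16], const float b[16], float out[16])
{
    float r[16];
    for (int c = 0; c < 4; c++)
        for (int i = 0; i < 4; i++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[k*4 + i] * b[c*4 + k];   /* (A*B)_{i,c} */
            r[c*4 + i] = s;
        }
    memcpy(out, r, sizeof(r));   /* safe even if out aliases a or b */
}

void mat4MulVec(const float m[16], const float v[4], float out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = m[0*4+i]*v[0] + m[1*4+i]*v[1]
               + m[2*4+i]*v[2] + m[3*4+i]*v[3];
}
```

With T a translation by (1,0,0) and R a 90-degree rotation about Z, T*R and R*T send the origin to different places, which is exactly the choice between the two bone updates above.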

Spite fucked around with this message at 06:10 on May 17, 2010

Spite
Jul 27, 2001

Small chance of that...

YeOldeButchere posted:

A little while ago I asked about the performance of branching in shaders, thanks for the answers by the way, but I'd like to know where to find that sort of info myself. I'm guessing this is GPU dependent and outside the scope of specific APIs which would explain why I don't recall anything about this mentioned in the D3D10 documentation. A quick google search doesn't seem to return anything useful either. I've found some info on NVIDIA's website but it's mostly stuff about how some GPU support dynamic branching and very little about performance. Ideally I'd like something that goes in some amount of detail about modern GPU architectures so I can really know why and when it's not a good idea to use branching in shaders, preferably with actual performance data shown.

I guess the real answer here would be to write some code and test the drat thing myself, but I'm wary of generalizing whatever results I'd get without better understanding of the underlying hardware.

That's all really proprietary stuff, so I doubt you'd get the actual numbers. On modern cards, I think it's around the order of 2000-4000 pixels for branching and discard. So if you can reasonably assume a block of that size will go down that path, you'll get the branch prediction benefit. Prediction misses are really expensive, so if you have a very discontinuous scene (in terms of code path), your perf will tank.
Of course, testing is the best way to determine this, as you say.

Spite
Jul 27, 2001

Small chance of that...

UraniumAnchor posted:

edit: Never mind, now I'm wondering why the hell noise() is always returning 0.0. Apparently it's designed that way. What the christ good is that?

edit2: Victory! Sorta. The reflection isn't quite right.



As a note, no GPU implements noise() in hardware. So don't ever use it.

Spite
Jul 27, 2001

Small chance of that...

heeen posted:

code:
extension GL_ARB_texture_gather not supported in profile gp4fp
Huh?

You're using Cg? It tries to emulate DirectX by offering "profiles" - unfortunately they don't expose all the card's features (for example, it requires gpu_program4 instead of gpu_shader4).

What are you trying to use texture_gather for? Shadow Maps?

Spite
Jul 27, 2001

Small chance of that...
I'm not sure what your skill level is, but all textures need Texture Coordinates, which are values that determine the texture mapping. A mapping from 0 - 1.0 will give the entire texture in a specific dimension, whereas 0 - 0.5 will give half of it, etc. Usually this means you have two coordinates for the x and y dimensions of a quad. They are usually called s and t.

Do you know how VBOs work? You'll need to set up your data in VBOs (please don't use Begin/End).
You'll also need to Gen and upload the data to the GPU via TexImage2D. You can use glTexParameteri to set the filtering modes. There are magnification and minification filters; GL_LINEAR is bilinear filtering and GL_LINEAR_MIPMAP_LINEAR is trilinear (which only makes sense as a minification filter).

Then setup the VBO:

glBindBuffer(GL_ARRAY_BUFFER, vbo);      // positions: 3 floats per vertex
glVertexPointer(3, GL_FLOAT, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, tcVBO);    // texcoords: 2 floats per vertex
glTexCoordPointer(2, GL_FLOAT, 0, NULL);

turn on state:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

bind the texture:

glBindTexture(GL_TEXTURE_2D, tex);
glEnable(GL_TEXTURE_2D);

draw:

glDrawArrays(GL_QUADS, 0, 4);

Try the Superbible/blue book instead of the red book, because the red book is old.

Spite
Jul 27, 2001

Small chance of that...

Dijkstracula posted:

Sadly, the first half of the Superbible still uses immediate mode / BEGIN/END, so it's only marginally better :sigh:

That's because Benj has decided he's never writing a book again. I'd love to spend a few months writing a really good tutorial, but I think my employer may frown on that.

As for shadows, it depends on what you are targeting and what you are doing. Very old stuff and mobile phones with little VRAM will probably benefit from Shadow Volumes if you want to save memory. They also tend to have crappy fill-rate, which puts you in a bind since that technique burns fill.

Shadow mapping is the most common implementation, but keep in mind it's harder to have a light the player can walk all the way around cast correct shadows. Think of a brazier or campfire - since you have to draw from the light's POV, you'd have to draw the scene 4-6 times to get all the info you need to do the shadowing. Or you could try something wonky with a 360 degree FOV, but I've never attempted that myself. Or just have certain objects be casters and only draw in that direction.

Spite
Jul 27, 2001

Small chance of that...
On Learning 3D:

Really, though, it's better to learn the fundamentals of 3d graphics and real-time rendering and not tie yourself to an API.

Once you get to the API level the rules of thumb are pretty simple:

Put everything into vertex buffers
Try to keep your scenes smaller than VRAM so you don't have to page
Do minimal draw calls (cull out stuff that's not visible)
Sort everything so you have as few state changes as possible - especially shader and rendertarget state (this is really freaking important)

There's more, of course, but that's a decent start.
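One common way to implement the "sort by state changes" rule above is a packed sort key, with the most expensive state in the most significant bits. The struct layout and bit widths here are just an illustration:

```c
/* Sketch: sort draw calls by a packed state key so draws sharing a
   shader (and then a texture) run back to back. */
#include <stdlib.h>
#include <stdint.h>

typedef struct {
    uint32_t shader;    /* illustrative state handles */
    uint32_t texture;
    int mesh;
} DrawCall;

static uint64_t stateKey(const DrawCall *d)
{
    /* Shader changes cost the most, so they get the high bits */
    return ((uint64_t)d->shader << 32) | d->texture;
}

static int byKey(const void *a, const void *b)
{
    uint64_t ka = stateKey((const DrawCall *)a);
    uint64_t kb = stateKey((const DrawCall *)b);
    return (ka > kb) - (ka < kb);
}

void sortByState(DrawCall *calls, size_t n)
{
    qsort(calls, n, sizeof(DrawCall), byKey);
}
```

At draw time you then only bind a shader or texture when the key changes from the previous call.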

Spite
Jul 27, 2001

Small chance of that...
If you have a membership, I highly recommend checking out the WWDC OpenCL videos.

As for the guy that wrote those - he worked with the OpenCL/GL team last year for a WWDC presentation. That's the Molecule demo if any of you went to the 09 conference. All that info is the result of that work.

Spite
Jul 27, 2001

Small chance of that...
I hate to be the guy that's all like "READ THE SPEC" but it's pretty well defined:

http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
From Page 186:
The image itself (referred to by data) is a sequence of groups of values. The first group is the lower left back corner of the texture image. Subsequent groups fill out rows of width width from left to right; height rows are stacked from bottom to top forming a single two-dimensional image slice

So you've got the data backwards. It's talking about glTexImage3D here because the spec is a mess, but a 2d texture is basically a 3d texture with no depth.
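Since most image loaders hand you rows top-down, the usual fix is a row flip before uploading, to match the bottom-up order the spec describes. This helper is just a sketch:

```c
/* Sketch: flip image rows in place so a top-down source matches GL's
   bottom-up texture layout (first row = lower left). */
#include <stdlib.h>
#include <string.h>

void flipRows(unsigned char *pixels, int width, int height, int bpp)
{
    size_t rowBytes = (size_t)width * bpp;
    unsigned char *tmp = malloc(rowBytes);
    if (!tmp) return;
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top = pixels + (size_t)y * rowBytes;
        unsigned char *bot = pixels + (size_t)(height - 1 - y) * rowBytes;
        memcpy(tmp, top, rowBytes);
        memcpy(top, bot, rowBytes);
        memcpy(bot, tmp, rowBytes);
    }
    free(tmp);
}
```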

Spite
Jul 27, 2001

Small chance of that...
I'm not familiar with glfw, but I'd stay away from GLUT. SDL also works.
You're probably better off handling context creation/destruction yourself in general.

Most image formats dictate endianness, so you shouldn't have to worry about that. OSX has the ImageIO library that will handle most formats. Model loading and animation is the hard part.

And the usual caveats:
Use vertex buffer objects and frame buffer objects
Batch your state together and change state as little as possible
That said, don't make things overcomplicated


Spite
Jul 27, 2001

Small chance of that...
The thing about the Superbible is that there are 3 authors, with different parts by each. The original (super old) book was written by Richard Wright. Then Benj Lipchak wrote the shader/more recent stuff. The fourth edition (whenever it became the blue book) has another author, but I don't know him. It's not a bad place to start, but OpenGL is in an odd place right now, and a new book would hopefully cover 4.0 - except that no one has actually written a 4.0 app, so who knows!

For learning, you're better off at tutorials or asking here.
