lord funk
Feb 16, 2004

I'm getting some pretty crusty lines with OpenGL ES 2.0. I am multisampling / anti-aliasing, but lines with glLineWidth(1.0) are pretty chunky:


(image zoomed a bit)

Is there a common solution to this that I'm missing?

lord funk
Feb 16, 2004

I feel like I'm thiiiiiiis close to understanding OpenGL + VBOs + VAOs. But maybe not:

I know you can apply matrix transformations to transform objects, but can you change the actual vertex positions in between frames?

code:
GLfloat _vertexData[] = {
//x, y,                   r, g, b, a
        -1.0f, -1.0f,      0.5f, 0.0f, 0.0f, 1.0f,
        1.0f, -1.0f,       0.0f, 0.5f, 0.0f, 1.0f,
        1.0f,  1.0f,       0.0f, 0.0f, 0.5f, 1.0f,
        -1.0f, 1.0f,       0.5f, 0.5f, 0.0f, 1.0f
};

- (void)setupGL {
    glGenVertexArraysOES(1, &_vertexArray);
    glBindVertexArrayOES(_vertexArray);

    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, 4 * 6 * sizeof(GLfloat), _vertexData, GL_DYNAMIC_DRAW);

    // Stride is 6 floats per vertex (x, y + r, g, b, a) = 24 bytes.
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(GLKVertexAttribColor);
    glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(8));
}
Is there a way to move the x/y coordinates in _vertexData? Is there a way to alter color data? If not, what should I be doing?

EDIT: is it glBufferSubData? And if it is, how come I always figure things out 2 minutes after asking?
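
For anyone who finds this later: yes, it's glBufferSubData. A minimal sketch of nudging one vertex's x/y in the layout from setupGL above (the vertex index and new values are just made up):

code:
// Each vertex is 6 floats (x, y, r, g, b, a), so vertex i starts at
// byte offset i * 6 * sizeof(GLfloat), and x/y are its first 2 floats.
GLfloat newXY[2] = { -0.5f, -0.25f };
NSUInteger vertexIndex = 2;

glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferSubData(GL_ARRAY_BUFFER,
                vertexIndex * 6 * sizeof(GLfloat),  // where vertex 2 starts
                sizeof(newXY),                      // just the x/y pair
                newXY);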

lord funk fucked around with this message at 23:19 on Jan 12, 2013

lord funk
Feb 16, 2004

Thanks - that makes sense. I'm thinking of making two vertexArrays, one with static objects that will be moved using matrix transformations, and another with dynamic objects whose vertices will change each frame. Does that sound like a good idea?

lord funk
Feb 16, 2004

Seems to be working great. Screenshot:



The circles and pac-man wedges are fixed coordinates, but the lines connecting all the touch points change their vertex x/y position each frame.

lord funk
Feb 16, 2004

Raenir Salazar posted:

Trying to navigate the differences between all the versions of OpenGL that exist vs. what we're using.

The entire internet needs a filter based on the version of OpenGL you actually want to learn about. I remember what a nightmare it was to teach myself OpenGL ES 2.0, and all the answers I found were for 1.x.

lord funk
Feb 16, 2004

Raenir Salazar posted:

This got me bonus marks during the demonstration because apparently I was the only person to actually have the curiosity to play around with shaders and see what I could do, crikey.

I met a bunch of iOS developers who couldn't understand why I would teach an iOS development college course. They were worried about job security and thought I would be flooding the market with new developers. The reality is that only one out of twenty students has the drive or curiosity to actually make anything of it.

lord funk
Feb 16, 2004

This isn't strictly 3D graphics related, but how do you connect your renderer to your models once a project gets larger (lots of different objects, maybe multiple scenes, etc.)? Lots of sample code I'm learning from just tosses a model into the renderer file that manages the graphics, but that seems totally wrong.

Is there a good structural paradigm that everyone uses?

lord funk
Feb 16, 2004

I'm trying to render sharp edged, geometric shapes. It's generated vertex data (not loaded from a model), and I'm updating vertex positions each frame. So I have to calculate surface normals.

What I'd like is for each of the triangles to appear flat and sharp-edged. What I have are nice smooth normal transitions:



It seems to me that I can't get sharp surface normals because I'm using indexed rendering (so my vertices are shared between surfaces). Do I have to stop using indexed rendering and instead just duplicate vertices? Or is there a trick to this that I don't know?

lord funk
Feb 16, 2004

Thanks for the info. I should probably point out I'm using Metal, but frankly it's good to get suggestions from more mature graphics APIs so I can dig around and see if there's anything similar.

Working with duplicated vertices:
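
And for anyone else doing this: with the vertices duplicated, each triangle just gets the cross product of two of its edges as the flat normal for all three of its vertices. A Swift sketch using simd (the function names are mine):

code:
import simd

// One flat normal per triangle: cross two edges, normalize, and give
// the same normal to all three (duplicated) vertices.
func flatNormal(_ a: SIMD3<Float>, _ b: SIMD3<Float>, _ c: SIMD3<Float>) -> SIMD3<Float> {
    return simd_normalize(simd_cross(b - a, c - a))
}

// positions holds 3 entries per triangle (no index sharing).
func rebuildNormals(positions: [SIMD3<Float>]) -> [SIMD3<Float>] {
    var normals = [SIMD3<Float>](repeating: .zero, count: positions.count)
    for i in stride(from: 0, to: positions.count, by: 3) {
        let n = flatNormal(positions[i], positions[i + 1], positions[i + 2])
        normals[i] = n
        normals[i + 1] = n
        normals[i + 2] = n
    }
    return normals
}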

lord funk
Feb 16, 2004

Shouldn't you be looking into the Metal Performance Shaders for a blur effect? I thought that's specifically what those were made for.
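
Something like this is what I mean -- a minimal sketch, assuming you already have a device, a command buffer, and source/destination textures set up:

code:
import MetalPerformanceShaders

// Blur sourceTexture into destinationTexture in one encode call.
// device, commandBuffer, and the textures are assumed to exist.
let blur = MPSImageGaussianBlur(device: device, sigma: 8.0)
blur.encode(commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)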

lord funk
Feb 16, 2004

Can anyone recommend a cube map / skybox stitcher for Mac? Something that can take iPhone panoramas or a series of photos to create the 6 box-side images for a cube map.

lord funk
Feb 16, 2004

Torabi posted:

So we recently started our first 3D Programming course at uni. I chose OpenGL and have been learning with the help of LearnOpenGL which is fantastic. But my classmates who chose DX are struggling to find proper learning material online, which is odd since the teacher said the exact opposite was going to happen. The only LearnOpenGL counterpart that they can find is this site http://www.directxtutorial.com/ which costs $50 if you want access to the whole thing. So I was wondering if any of you DX people know of any good sites that teach you DX? (DX11 to be exact) I've been looking around too but I can't seem to find anything decent.

:psyduck: you are taking a class where your prof just lets you choose whatever tools you want? How does that work at all?

lord funk
Feb 16, 2004

Torabi posted:

we have three DX assistants and one OpenGL assistant that go through our assignments

Ah okay that explains a lot to me.

lord funk
Feb 16, 2004

I feel like an idiot for asking, but how do you set individual modelViewMatrix matrices for different models on screen in Metal? In OpenGL I would just set the uniform's modelViewMatrix before each draw call, but Metal only uses the last one I set.

I'm pretty sure this is by design, but I'm just blanking on how to do this.

edit: oh I think I have to make my uniforms buffer an array and fill it with the individual model matrices

edit 2: no, I take it back. I have no idea how to do this.

lord funk fucked around with this message at 17:48 on Dec 12, 2015

lord funk
Feb 16, 2004

Doc Block posted:

Keep in mind that Metal doesn't draw anything when you encode a draw call. So if you make a single, universal uniform buffer for modelViewProjectionMatrix and friends and then keep changing the contents before committing the command buffer, only the last contents will be in there when it actually does the draw calls.

Figured that part out. So I have to create an MTLBuffer of size models.count * sizeof(uniforms), and then somehow pass a model index to look up the correct uniforms? Is that about right?
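
Something like this is what I'm picturing -- a sketch, where Uniforms, models, device, and encoder are my own placeholder names:

code:
import Metal
import simd

struct Uniforms { var modelViewProjectionMatrix: simd_float4x4 }

// Round each model's slot up to 256 bytes (buffer offsets on OS X have
// to be 256-byte aligned).
let alignedUniformsSize = (MemoryLayout<Uniforms>.stride + 255) & ~255
let uniformsBuffer = device.makeBuffer(length: alignedUniformsSize * models.count,
                                       options: .storageModeShared)!

encoder.setVertexBuffer(uniformsBuffer, offset: 0, index: 1)
for i in 0..<models.count {
    // Point the same binding at model i's slice before its draw call.
    encoder.setVertexBufferOffset(i * alignedUniformsSize, index: 1)
    // ... write model i's matrix into its slot, encode its draw ...
}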

lord funk
Feb 16, 2004

Doc Block posted:

If that's what you want to do, yeah.

My engine cheats and just gives each object its own uniforms buffer. So it attaches their buffer before issuing the draw call for them, and since each -setVertexBuffer... call is an individual command it's OK.

Most objects don't change often in my game, so their uniforms don't change often and so it was easier to just have the objects cache them in a Metal buffer and set that buffer at draw time. Please don't laugh at me...

No, your way is exactly what I needed. :)
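
(i.e., per object it's just this before each draw -- a sketch, with uniformsBuffer standing in for each object's own cached MTLBuffer:)

code:
// Per-object variant: attach each object's own small uniforms buffer
// right before its draw call; every set call is its own command.
for model in models {
    encoder.setVertexBuffer(model.uniformsBuffer, offset: 0, index: 1)
    // ... encode this model's draw call ...
}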

lord funk
Feb 16, 2004

When rendering a face, the normal points in one direction, so the 'front' of the face reflects light but the back does not. Is there a way to get the back to act the same way as the front? i.e., is there a way to detect that the face is being drawn facing the wrong way round so I can invert my normal?

edit: just realized, can I just check the clockwise-ness of the face when I update the normal? going to try that out...
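
edit 2: for the record, in Metal the rasterizer can also just tell the fragment shader which side it's shading, via [[front_facing]]. A sketch in MSL (VertexOut and the shading are placeholders):

code:
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float3 normal;
};

fragment float4 litFragment(VertexOut in [[stage_in]],
                            bool isFront [[front_facing]])
{
    // Flip the normal when shading the back side of the face.
    float3 n = isFront ? in.normal : -in.normal;
    // ... light with n as usual; this placeholder just visualizes it:
    return float4(n * 0.5 + 0.5, 1.0);
}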

lord funk fucked around with this message at 16:51 on Feb 16, 2016

lord funk
Feb 16, 2004

A general rendering question: I've got a scene in Metal where there are a bunch of floating stretched icosphere models. When the camera is zoomed out, the frame rate is a solid 60fps. But zooming in causes the frame rate to drop.

Zoomed in / out pictures:
http://imgur.com/a/QibKz

Is there a common cause for this? Something I can do to mitigate it?

lord funk
Feb 16, 2004

Doc Block posted:

How complicated/expensive is your fragment shader? How many values are being interpolated for each fragment? What's the blend mode?

I did a test and made a fragment shader that just returns a constant color. Still chugs when zoomed in. I've also tried turning blending off completely, and depth testing on / off.

Here is a video of it in action (the results are the same even when not blending):
https://www.youtube.com/watch?v=b7vut8k_tOc

quote:

edit: is this on iOS or OS X? I'm not familiar with Metal on OS X (my 2011 iMac is too old for it). I know some of the alignment requirements are different. And if your buffers are in system memory instead of VRAM then obviously things will be slower.

OS X. Yep - Metal buffer offsets on OS X all have to be 256-byte aligned.

I am going to look into the memory location. Xcode is telling me that I'm using all CPU and no GPU in the debug view, but that may just be Xcode being its usual POS broken self (the Metal system trace tool isn't even supported on OS X).

lord funk
Feb 16, 2004

Well it's not because of filling with triangles - lines do the same thing.

Here is a new datapoint: this seems to only happen on my hi-dpi monitor. At 1280x800 on a projector it runs at 60fps.

@Doc Block: I was thinking something similar, like maybe once the model vertices are behind the viewport they get stretched to infinity or something. But I'm not sure what to change here.

@Sex Bumbo: I haven't added any visibility testing on my end, though that's not to say I'm not missing something about this engine.

lord funk
Feb 16, 2004

I'm on a late 2013 Mac Pro w/ AMD FirePro D500 3072 MB graphics. I'm going to leave the issue alone for now, since I guess it'll work fine when I present it. There is a slight difference in the number of pixels this thing is pushing on the display v. projector:

lord funk
Feb 16, 2004

Apple's been releasing some half-baked tools just to get features out the door. Like Doc said, the tools are already there on iOS. Then they added just enough support on OS X to say fuck it, we'll finish it later.

I'm fairly sure we'll see announcements about tool support on OS X at WWDC next month.

lord funk
Feb 16, 2004

I'm looking for examples of cool / creative fragment shaders. Basically anything that's fun or interesting. Is there a place where people post these? Or does anyone have a neat example?

lord funk
Feb 16, 2004

Anyone have a good example of using touch movement to rotate a 3D object around its center axis? I know I have to transform the angle based on the camera view matrix, but I also have that problem where when you lift your touch and put it back down, the model matrix is oriented to its last position and doesn't match what you might consider 'up' and 'down'.

edit: nm this looks like a good one:
http://www.learnopengles.com/rotating-an-object-with-touch-events/

lord funk fucked around with this message at 17:37 on Nov 16, 2017

lord funk
Feb 16, 2004

UncleBlazer posted:

If you're happy doing matrix manipulation then it's cool but I'd recommend quaternions for rotations, I found them less of a headache. Not that it answers your touch issue though, sorry!

Yep, quaternions are great!
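
For anyone who lands on this later, the gist of the quaternion fix is to accumulate rotation incrementally instead of storing absolute angles -- a sketch, with a made-up sensitivity constant and the camera assumed to look down -z:

code:
import simd

// Each touch delta becomes a small rotation composed on top of the
// current orientation. Pre-multiplying applies the new rotation about
// the camera's axes, so screen-up stays 'up' even after you lift and
// replace your touch.
var orientation = simd_quatf(angle: 0, axis: SIMD3<Float>(0, 1, 0))

func applyDrag(dx: Float, dy: Float) {
    let sensitivity: Float = 0.01
    let yaw   = simd_quatf(angle: dx * sensitivity, axis: SIMD3<Float>(0, 1, 0))
    let pitch = simd_quatf(angle: dy * sensitivity, axis: SIMD3<Float>(1, 0, 0))
    orientation = (yaw * pitch * orientation).normalized
}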

New question: I want to do a kind of hybrid ortho / projection display of 3D objects. I want to be able to project them, but then translate along the screen's 2D plane wherever I want them on the screen. Like this:



So the hex shapes there are orthographic projection, and I want the shapes to 'hover' over them.

How should I do this? I thought I could make a vertex shader that just takes the final position and translates the x/y coordinate, but that's still in 3D projected view space. I want to just shift the final projected image.

lord funk
Feb 16, 2004

Xerophyte posted:

If you want an in-viewport translation with the perspective preserved then you can also just change the viewport transform. Render to texture makes sense if you intend to reuse the result in several places or over several frames.

Yeah that makes total sense! Thanks for the approach details.

I do want to render the objects each frame, so they can react to environment lighting changes.

lord funk
Feb 16, 2004

Zerf posted:

Heh, funny you should bring this up. I just implemented this last week. My solution was to handle it in the shader. Since the perspective transform is non-linear, each affected vertex now needs to be multiplied by two matrices instead of one, with some meddling between the multiplications. Quite a simple solution, but it works well.

Really? Cool :) Would you be willing to share a bit of your shader transformations code? I tried Xerophyte's answer, but I'm probably just messing something up along the way.

Again, just to be clear we're on the same page, this is what I'm on about:

https://i.imgur.com/R2LKQY3.mp4

lord funk
Feb 16, 2004

Oh my god, what is it about posting on the internet that makes you immediately figure it out? Done. Thanks all, and especially thanks Xerophyte, 'cause your answer was awesome.
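
For posterity, what finally worked looks roughly like this -- an MSL sketch, where VertexIn, VertexOut, and Uniforms are my own structs:

code:
#include <metal_stdlib>
using namespace metal;

struct VertexIn  { float3 position [[attribute(0)]]; };
struct VertexOut { float4 position [[position]]; };
struct Uniforms {
    float4x4 modelViewProjectionMatrix;
    float2   screenOffset;   // where to park the object, in NDC units
};

vertex VertexOut hoverVertex(VertexIn in [[stage_in]],
                             constant Uniforms &u [[buffer(1)]])
{
    VertexOut out;
    float4 clip = u.modelViewProjectionMatrix * float4(in.position, 1.0);
    // Nudge in clip space, scaled by w, so after the perspective divide
    // it comes out as a pure 2D translation of the projected image.
    clip.xy += u.screenOffset * clip.w;
    out.position = clip;
    return out;
}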

lord funk
Feb 16, 2004

This seems simple, but I can't figure out how to do a blend mode in Metal that acts like Photoshop's Difference blending, especially where changing the alpha (opacity in PS) works the same way. My issue is that the source is always visible, even when its alpha is zero.

This is what I have:

code:
colorAttachments[i].isBlendingEnabled = true
// subtract computes (src * srcFactor) - (dst * dstFactor), clamped at
// zero -- so with source alpha 0 this darkens toward black instead of
// leaving the destination alone.
colorAttachments[i].rgbBlendOperation = MTLBlendOperation.subtract
colorAttachments[i].alphaBlendOperation = MTLBlendOperation.add
colorAttachments[i].sourceAlphaBlendFactor = .zero
colorAttachments[i].sourceRGBBlendFactor = .sourceAlpha
colorAttachments[i].destinationAlphaBlendFactor = .one
colorAttachments[i].destinationRGBBlendFactor = .one
My brain is broken. Every time I think I understand it, I'm way off.

lord funk fucked around with this message at 01:32 on Mar 20, 2019

lord funk
Feb 16, 2004

Yeah, you're both right, there isn't a blending-only solution for the PS difference behavior. I should be able to roll my own fragment shader that takes care of it -- just now looking into how I can reference the destination color within the shader.
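
A sketch of the shader route, with a big caveat: reading the destination via [[color(0)]] is programmable blending, which only exists on tile-based Apple GPUs (iOS and, later, Apple silicon) -- on a discrete Mac GPU you'd have to bind the previous contents as a texture instead. VertexOut here is a placeholder:

code:
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float4 color;
};

// Photoshop Difference is |dst - src|, lerped by source alpha so that
// alpha = 0 leaves the destination untouched.
fragment float4 differenceBlend(VertexOut in [[stage_in]],
                                float4 dst [[color(0)]])
{
    float3 diff = abs(dst.rgb - in.color.rgb);
    return float4(mix(dst.rgb, diff, in.color.a), dst.a);
}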

lord funk
Feb 16, 2004

I want to draw some 3D shapes on top of everything else on the screen. The problem is the depth test: the shapes get tested against the existing depth buffer, so they clip into other 3D shapes on the screen.

Currently, I solve this by doing a second render pass to draw on top of the existing texture. But that seems to add a bunch of overhead.

Is there a way to trick the depth check into drawing a shape over everything else? Or am I stuck doing two passes?

lord funk
Feb 16, 2004

Thanks all, and yeah, using Metal. I'm so close to nailing down my engine and this is all part of the optimization process.
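
For anyone searching later, the usual single-pass trick is a second depth-stencil state whose compare function always passes -- a sketch, assuming Metal and with my own variable names:

code:
import Metal

// Depth state for the on-top shapes: always pass, never write.
let overlayDesc = MTLDepthStencilDescriptor()
overlayDesc.depthCompareFunction = .always
overlayDesc.isDepthWriteEnabled = false
let overlayDepthState = device.makeDepthStencilState(descriptor: overlayDesc)!

// ... draw the regular scene with the normal depth state ...
encoder.setDepthStencilState(overlayDepthState)
// ... then encode the on-top shapes last, in the same pass ...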

lord funk
Feb 16, 2004

I say learn Blender. If (a long way) down the road you need to migrate to something else, you will be able to take your knowledge from Blender and translate it to the new app / UI. That way you will be starting from a place of experience and knowledge :eng101:
