baldurk
Jun 21, 2005

If you won't try to find coherence in the world, have the courtesy of becoming apathetic.
I haven't tried this, but off the top of my head: what if you passed a vertex stream with ptex-style UVs, i.e. 0 to 1 in both directions across the quad? Then you could just define a distance function to the nearest edge and return that as your colour for anti-aliased lines, and clip when it's == 0. Maybe with some extra parameters if you want lines of consistent pixel width.

There might be a smarter way to do it that avoids an extra vertex stream, or at least generates it in the VS from less data. If you're using triangle lists and always have a quad's two triangles back-to-back in that list, you could maybe generate it from the vertex ID, I think?
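
For what it's worth, a sketch of the fragment-shader half of that idea (written here as a GLSL source string; v_QuadUV is the hypothetical ptex-style stream interpolated across each quad, and u_LineWidth is a made-up thickness parameter):
code:
const char* quadEdgeFS = R"(
    #version 150 core
    in vec2 v_QuadUV;           // 0 to 1 in both directions across the quad
    uniform float u_LineWidth;  // edge thickness, in UV units
    out vec4 fragColor;
    void main() {
        // Distance to the nearest of the four quad edges, in UV space.
        vec2 d2 = min(v_QuadUV, 1.0 - v_QuadUV);
        float d = min(d2.x, d2.y);
        // A smooth ramp near the edge gives anti-aliased lines; scaling
        // u_LineWidth by fwidth(d) would give consistent pixel width.
        float edge = 1.0 - smoothstep(0.0, u_LineWidth, d);
        fragColor = vec4(vec3(0.0), edge);
    }
)";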

netcat
Apr 29, 2008
Yeah, something like that should work; I'll have to try it later. Since my lists are all quads, the clever approach should probably work as well, but I'll try the obvious one first...

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I feel like this is something you could maybe use the geometry shader for? I could see that becoming a problem if you're using a triangle strip to draw the quad, but if not, you can arrange your vertex data so that the diagonal is the last vertex pair, and output a line strip for only the first two vertex pairs.

E: Although, I guess doing it like this, any assumptions about winding order go out the window.
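
Something like this, maybe (a GLSL geometry shader as a source string; untested, and subject to the winding caveat above):
code:
const char* quadEdgeGS = R"(
    #version 150 core
    layout(triangles) in;
    layout(line_strip, max_vertices = 3) out;
    void main() {
        // A line strip through vertices 0,1,2 draws edges 0-1 and 1-2 only,
        // so if the quad's diagonal is always the last vertex pair of each
        // triangle, it never gets drawn.
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
)";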

Joda fucked around with this message at 18:27 on Sep 14, 2016

Doc Block
Apr 15, 2003
Fun Shoe
Can't you just draw lines around the quads after you've drawn the quads themselves? Like, draw the quads, then bind a shader that outputs a solid color, specify GL_LINES instead of GL_TRIANGLES or whatever for the second draw call along with an array that just has the quad corners, etc.

Doc Block fucked around with this message at 19:38 on Sep 14, 2016

netcat
Apr 29, 2008

Doc Block posted:

Can't you just draw lines around the quads after you've drawn the quads themselves? Like, draw the quads, then bind a shader that outputs a solid color, specify GL_LINES instead of GL_TRIANGLES or whatever for the second draw call along with an array that just has the quad corners, etc.

I think that can have problems with z-fighting? I've actually done this before, and I ended up drawing everything twice with a combination of glPolygonOffset and glDepthRange to avoid the z-fighting, but it would be nice to do it in a single draw call. Also, I thought glPolygonOffset was deprecated, but according to https://www.opengl.org/sdk/docs/man/html/glPolygonOffset.xhtml it's still in 4.5.

Doc Block
Apr 15, 2003
Fun Shoe
I guess I just assumed this was for something 2D, so you'd have the depth buffer turned off.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

netcat posted:

I think that can have problems with z-fighting? I've actually done this before, and I ended up drawing everything twice with a combination of glPolygonOffset and glDepthRange to avoid the z-fighting, but it would be nice to do it in a single draw call. Also, I thought glPolygonOffset was deprecated, but according to https://www.opengl.org/sdk/docs/man/html/glPolygonOffset.xhtml it's still in 4.5.

Yeah, when I did this, I used the evil option of gl_FragDepth = gl_FragCoord.z - 1e-6; in my fragment shader.

Sex Bumbo
Aug 14, 2004

Suspicious Dish posted:

Yeah, when I did this, I used the evil option of gl_FragDepth = gl_FragCoord.z - 1e-6; in my fragment shader.

:cry:

If you set your depth test to equal or less-equal (or greater-equal, depending on your depth convention) and use the same matrices for both passes, it should generally work without z-fighting. At least in my experience.

Also, a fixed-function depth offset should disrupt the pipeline less, right? Writing gl_FragDepth forces the depth test to wait for the fragment shader.
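
Something along these lines, as a rough sketch (shader handles and index counts are placeholders):
code:
// Pass 1: fill the quads as usual.
glDepthFunc(GL_LESS);
glUseProgram(fillShader);
glDrawElements(GL_TRIANGLES, quadIndexCount, GL_UNSIGNED_INT, nullptr);

// Pass 2: same vertices, same matrices. GL_LEQUAL lets fragments at
// exactly the same depth pass instead of z-fighting.
glDepthFunc(GL_LEQUAL);
glUseProgram(outlineShader);
glDrawElements(GL_LINES, outlineIndexCount, GL_UNSIGNED_INT, nullptr);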

Sex Bumbo fucked around with this message at 05:43 on Sep 15, 2016

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
My recollection is that glPolygonOffset did nothing compared to the gl_FragDepth hack, but maybe I was just being an idiot.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I'm working on a hacky way to get a sub-surface-scattering-like effect, where I basically just run a Gaussian over a texture that maps onto an object the same way its original texture does, but I'm not sure how to deal with seams. My idea is a separate lookup texture that points to wherever the seam continues, for when the filter tries to look up values across a seam, and generating this lookup texture is where I'm stuck. Is there an algorithm of some sort that can identify both sides of a seam? Like, say I give it a model with UVs and it returns a list of edges (index pair pairs) that constitute seams.

I tried googling, but all I get is results about seam removal from textures/models.
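
For what it's worth, a sketch of one way that list could be built, assuming OBJ-style indexing where every face corner carries separate position and UV indices (all the types and names here are made up): an edge is a seam when the two triangles sharing the same pair of positions reference different UVs for them.
code:
#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Corner { uint32_t pos, uv; };  // one face corner: position + UV index
using Tri = std::array<Corner, 3>;

struct SeamEdge {
    uint32_t posA, posB;                    // the shared position indices
    std::pair<uint32_t, uint32_t> uvSide1;  // UV indices one triangle uses
    std::pair<uint32_t, uint32_t> uvSide2;  // UV indices the other one uses
};

std::vector<SeamEdge> findSeams(const std::vector<Tri>& tris) {
    // For each undirected position edge, collect the UV pairs used on it.
    std::map<std::pair<uint32_t, uint32_t>,
             std::vector<std::pair<uint32_t, uint32_t>>> edges;
    for (const Tri& t : tris) {
        for (int i = 0; i < 3; ++i) {
            Corner a = t[i], b = t[(i + 1) % 3];
            if (a.pos > b.pos) std::swap(a, b);  // canonical edge direction
            edges[{a.pos, b.pos}].push_back({a.uv, b.uv});
        }
    }
    std::vector<SeamEdge> seams;
    for (const auto& [edge, uvPairs] : edges)
        if (uvPairs.size() == 2 && uvPairs[0] != uvPairs[1])  // UVs disagree
            seams.push_back({edge.first, edge.second, uvPairs[0], uvPairs[1]});
    return seams;
}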

Joda fucked around with this message at 00:20 on Oct 8, 2016

Raenir Salazar
Nov 5, 2010

College Slice
I am using Assimp to load a rigged mesh into an OpenGL application; the application is meant to let me manually manipulate the bones and see the mesh transform in real time.

Everything works, except that I want to rotate the bones so that they rotate about the global Z axis facing the camera.

So that, supposing a hand is facing mostly towards me, the hand rotates as though its joint's Z axis were pointed towards the screen.

Right now I have this result:

The bones rotate according to their local Blender roll axis.

code:
glm::mat4 CurrentNodeTransform;
CopyaiMat(&this->getScene()->mRootNode->FindNode(boneName.c_str())->mTransformation,
          CurrentNodeTransform);

// Rotate about the bone's local Z axis.
glm::mat4 newRotation = glm::rotate(angle, glm::vec3(0.0, 0.0, 1.0));

CurrentNodeTransform = CurrentNodeTransform * newRotation;

CopyGlMatToAiMat(CurrentNodeTransform,
                 this->getScene()->mRootNode->FindNode(boneName.c_str())->mTransformation);
  
I posted to the OpenGL/Vulkan forums and, IIRC, they suggested this change:

quote:

Obtain the transformation from bone space to view space (i.e. the view transformation multiplied by the global bone transformation). Invert it to get the transformation from view space to bone space. Transform the vector (0,0,1,0) by the inverse to get the bone-space rotation axis.

I am probably misunderstanding this, but I now have this:
code:
glm::mat4 BoneToView = View * CurrentNodeTransform;
glm::mat4 BoneToViewInverse = glm::inverse(BoneToView);
glm::vec4 BoneSpaceRotationAxis = 
     BoneToViewInverse * glm::vec4(0, 0, 1, 0);
printf("Printing BoneSpaceRotationAxis");
PrintVector4(BoneSpaceRotationAxis);
glm::mat4 newRotation = 
     glm::rotate(angle, glm::normalize(glm::vec3(BoneSpaceRotationAxis)));

CurrentNodeTransform = CurrentNodeTransform * newRotation;  
But the bones still seem to rotate about their Blender roll axis (just slightly differently).

(Image: lines showing the axis each bone rotates around, or a circle if it happens to be pointed at the camera.)

CurrentNodeTransform is the transform of the bone relative to its parent. I've tried cramming in, at different locations, a version that multiplies the local transform by its parent transform, but that just made things weird.

:smith:

e: Edited a couple of times to not break the forum CSS.

Raenir Salazar fucked around with this message at 18:39 on Oct 27, 2016

Raenir Salazar
Nov 5, 2010

College Slice
Aaaaand I solved it. No idea how I was so close before but somehow fumbled the ball at the last second; I feel like I would've tried this permutation of multiplications already, and I just don't know how I missed it.

code:
glm::mat4 LocalNodeTransform;
glm::mat4 GlobalNodeTransform;
CopyaiMat(&this->getScene()->mRootNode->FindNode(boneName.c_str())->mTransformation,
          LocalNodeTransform);
FindBone(boneName, this->getScene()->mRootNode, glm::mat4(1.0), GlobalNodeTransform);

glm::mat4 View = glm::lookAt(
    glm::vec3(0, 0, 1), // camera position in world space
    glm::vec3(0, 0, 0), // looks at the origin
    glm::vec3(0, 1, 0)  // head is up (set to 0,-1,0 to look upside-down)
    );

// The rotation axis needs the GLOBAL bone transform, not the local one.
glm::mat4 BoneToView = GlobalNodeTransform * View;
glm::mat4 BoneToViewInverse = glm::inverse(BoneToView);
glm::vec4 BoneSpaceRotationAxis = BoneToViewInverse * glm::vec4(0, 0, 1, 0);
printf("Printing BoneSpaceRotationAxis");
PrintVector3(glm::normalize(BoneSpaceRotationAxis));
glm::mat4 newRotation = glm::rotate(angle,
    glm::normalize(glm::vec3(BoneSpaceRotationAxis)));

LocalNodeTransform = m_GlobalInverseTransform *
    newRotation * LocalNodeTransform;
// ^^^^^^^^^^^^^^ this line is wrong; see the correction below

CopyGlMatToAiMat(LocalNodeTransform,
                 this->getScene()->mRootNode->FindNode(boneName.c_str())->mTransformation);
I had the general idea to separate out my global and local transforms but somehow whiffed and got my multiplication order mixed up at the last second; I had it right originally, just not with the inverse BoneToView matrix there.
The second-to-last line needed to be:

code:
LocalNodeTransform = LocalNodeTransform * newRotation;
not:
code:
LocalNodeTransform = m_GlobalInverseTransform * newRotation * LocalNodeTransform;
which was just me randomly trying things and getting the order wrong.

Raenir Salazar fucked around with this message at 22:47 on Oct 27, 2016

gonadic io
Feb 16, 2011

>>=
Any idea why my verts are off when trying to draw instanced cubes?



code:
#version 150 core

in vec4 a_Pos;
in vec2 a_TexCoord;
in vec3 a_Translate;
in float a_Scale;
out vec2 v_TexCoord;

uniform Locals {
	mat4 u_Transform;
};

void main() {
    v_TexCoord = a_TexCoord;
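    // Suspect line below (confirmed in the edits at the end of this post):
    // a_Pos is a vec4 with w = 1, so a_Pos * a_Scale scales w as well,
    // leaving w = a_Scale + 1.0 after the add. Scaling only a_Pos.xyz
    // and rebuilding the vec4 with w = 1.0 avoids that.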
    gl_Position = u_Transform * (a_Pos * a_Scale + vec4(a_Translate, 1.0));
    gl_ClipDistance[0] = 1.0;
}
The cube's verts (a_Pos) define the cube between (0, 0, 0) and (1, 1, 1); the scale (a_Scale) and translate (a_Translate) of the instances are
code:
[
	 Instance { translate: [0, 0, 0], scale: 0.5 }, 
	 Instance { translate: [0.5, 0, 0], scale: 0.5 }, 
	 Instance { translate: [0, 0.5, 0], scale: 0.5 }, 
	 Instance { translate: [0.5, 0.5, 0], scale: 0.5 }, 
	 Instance { translate: [0, 0, 0.5], scale: 0.5 }, 
	 Instance { translate: [0.5, 0, 0.5], scale: 0.25 }, 
	 Instance { translate: [0.75, 0, 0.5], scale: 0.25 }, 
	 Instance { translate: [0.5, 0.25, 0.5], scale: 0.25 }, 
	 Instance { translate: [0.75, 0.25, 0.5], scale: 0.25 }, 
	 Instance { translate: [0.5, 0.25, 0.75], scale: 0.25 }, 
	 Instance { translate: [0.75, 0.25, 0.75], scale: 0.25 }, 
	 Instance { translate: [0, 0.5, 0.5], scale: 0.5 }, 
	 Instance { translate: [0.5, 0.5, 0.5], scale: 0.5 }, 
]
It looks like the smaller cube is both in the wrong place and being drawn to the wrong scale.

My code is in Rust using gfx, and located here, but I'm pretty sure it's my shader code loving up here.

e: it could be that my w-parameter (which is defined to be 1 for each vertex) is getting scaled.
e2: Yep, that was it. Lesson learned, don't multiply the w-param when doing local scaling.

gonadic io fucked around with this message at 15:10 on Nov 13, 2016

Raenir Salazar
Nov 5, 2010

College Slice
So, for some reason,

code:
glm::rotate(glm::mat4(1.0), RotateX, glm::vec3(1, 0, 0))
Does not work, but:

code:
glm::rotate(glm::mat4(1.0), glm::radians(RotateX), glm::vec3(1, 0, 0))
Does.

The former code results in rotations of approximately 30 degrees for each unit of rotation, so Y: 1 results in my model rotating about 30 degrees.

And thus Y: 90 looks like it goes something like 15 to 30 degrees too far.

But glm::radians works.

This is completely at odds with the documentation for GLM, as far as I can tell, which says "angleInDegrees" for the second parameter.

What gives?

Nehacoo
Sep 9, 2011

Raenir Salazar posted:

So, for some reason,

code:
glm::rotate(glm::mat4(1.0), RotateX, glm::vec3(1, 0, 0))
Does not work, but:

code:
glm::rotate(glm::mat4(1.0), glm::radians(RotateX), glm::vec3(1, 0, 0))
Does.

The former code results in rotations of approximately 30 degrees for each unit of rotation, so Y: 1 results in my model rotating about 30 degrees.

And thus Y: 90 looks like it goes something like 15 to 30 degrees too far.

But glm::radians works.

This is completely at odds with the documentation for GLM, as far as I can tell, which says "angleInDegrees" for the second parameter.

What gives?
Are you sure that you are looking at the documentation for the right version?

Documentation for 0.9.8 says that the angle should be expressed in radians:
http://glm.g-truc.net/0.9.8/api/a00169.html#ga161b1df124348f232d994ba7958e4815

From the GLM manual:
"7.11. What unit for angles is used in GLM?
GLSL is using radians but GLU is using degrees to express angles. This has caused GLM to use inconsistent units for angles. Starting with GLM 0.9.6, all GLM functions are using radians."

Raenir Salazar
Nov 5, 2010

College Slice
That explains it! Google brings me 0.9.3-ish versions of the manual for some reason.

Nehacoo
Sep 9, 2011

Is there any particular reason why glDepthFunc accepts GL_NEVER and GL_ALWAYS? I can't think of any.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Nehacoo posted:

Is there any particular reason why glDepthFunc accepts GL_NEVER and GL_ALWAYS? I can't think of any.

I could see a theoretical reason: pairing it with stencil states that use the depth-pass/depth-fail behavior. Not sure why you'd ever WANT to do that, but that's not usually a good reason to restrict something from an API by itself.
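
For example, a hypothetical sketch: since GL_NEVER guarantees the depth test fails, pairing it with a depth-fail stencil op would let you mark every covered pixel in the stencil buffer while guaranteeing no color or depth writes (the draw call here is a placeholder):
code:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_NEVER);                      // depth test always fails
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);          // stencil test always passes
glStencilOp(GL_KEEP, GL_REPLACE, GL_KEEP);  // on depth-fail, write the ref
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawMarkerGeometry();                       // hypothetical draw call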

Nehacoo
Sep 9, 2011

Hubis posted:

I could see a theoretical reason: pairing it with stencil states that use the depth-pass/depth-fail behavior.

Hmm, I can't see it because I think that is beyond my current level of understanding of OpenGL :v: (I haven't touched upon stencils at all)

Hubis posted:

Not sure why you'd ever WANT to do that, but that's not usually a good reason to restrict something from an API by itself.

I get your point, but sometimes I wish OpenGL was a bit more restricted so that mortals could comprehend it. I understand that has more to do with legacy, though.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Nehacoo posted:

Hmm, I can't see it because I think that is beyond my current level of understanding of OpenGL :v: (I haven't touched upon stencils at all)


I get your point, but sometimes I wish OpenGL was a bit more restricted so that mortals could comprehend it. I understand that has more to do with legacy, though.

Yeah, OpenGL is basically the worst of all worlds in that sense: half the API is a relic of a style no one uses any more, the other half is a gross hack towards modern techniques that is inelegant because of the need for compatibility with legacy features, and none of it really matches the way hardware ACTUALLY works nowadays.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
I made an effort to learn 'modern' OpenGL 4 not too long ago, and it seems to me like you should use only a minimal number of actual GL calls to load up your card, and then perform all of the transformations, coloring, masking, etc. in shaders. Heck, I've been doing collision detection with compute shaders; all I have to do is chuck all of my data structures at the graphics card and read them back in after every pass.

I don't know how it might work with es though.

I think the main problem with the gl api is that, due to opengl's history, there are like a dozen ways to put a cube on the screen, some of which are not good, and it's difficult for beginners to pick a starting point for study.

dougdrums fucked around with this message at 17:02 on Dec 12, 2016

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today
If you really want to learn a low level graphics API these days you should probably just go straight to Vulkan. It's big and complicated and the drivers aren't super stable yet, but at least it makes sense.

The Gay Bean
Apr 19, 2004
I'm working on mesh processing, and I'm finding it hard to settle on an I/O library / format. Dealing with textures is the most problematic part.

An example: I want a library that makes the following process easy.
1. Load an untextured mesh consisting of a vertex list and face list (no texture information) from file_0.ext
2. Assign texture coordinates to some of the vertices, referencing image 1.
3. Write file_1.ext
4. Assign texture coordinates to some of the vertices, referencing image 2.
5. Write file_2.ext
6. In a new binary, load file_0.ext, load file_1.ext and file_2.ext into memory.
7. Retrieve the texture coordinates and material properties (including image file path) for the vertex in file_1.ext and file_2.ext corresponding to vertex i in file_0.ext.

I've messed with AssImp and OpenMesh so far, and it seems like their internal data structures do not reflect this sort of structure. AssImp treats a mesh that has several materials as several different meshes, each with a separate vertex list, which destroys the original structure of the index vector. Thus, there is no connection between the original index vector and that which is written in file_1.ext and file_2.ext. OpenMesh's support for texture coordinates seems to be completely broken.

It seems like the Wavefront OBJ format is designed to handle situations like this (note that OBJ indices are 1-based), e.g.

code:
v 0 0 0
v 0 1 0
v 1 0 0
v 1 1 0

vt 0 0
vt 0 1
vt 1 0
vt 0.25 0.25
vt 0.25 0.5
vt 0.5 0.25

g tex_0
usemtl mat_0
f 1/1 2/2 3/3

g tex_1
usemtl mat_1
f 2/4 3/5 4/6
So, has anybody come across a C++ library that makes it easy to preserve mesh topology while working with textures as in the above circumstance? I guess I could just write my own; OBJ isn't a very complex format.
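
If it comes to writing my own, the writer half is only a few lines; a sketch with one shared vertex list and per-file UV assignments (the types and names are made up):
code:
#include <array>
#include <cstdio>
#include <vector>

struct V3 { float x, y, z; };
struct V2 { float u, v; };
struct FaceCorner { int pos, uv; };  // 0-based indices into verts / uvs

void writeObj(const char* path,
              const std::vector<V3>& verts,
              const std::vector<V2>& uvs,
              const std::vector<std::array<FaceCorner, 3>>& faces,
              const char* material) {
    FILE* f = std::fopen(path, "w");
    for (const V3& v : verts) std::fprintf(f, "v %g %g %g\n", v.x, v.y, v.z);
    for (const V2& t : uvs)   std::fprintf(f, "vt %g %g\n", t.u, t.v);
    std::fprintf(f, "usemtl %s\n", material);
    for (const auto& face : faces)
        std::fprintf(f, "f %d/%d %d/%d %d/%d\n",  // +1 because OBJ is 1-based
                     face[0].pos + 1, face[0].uv + 1,
                     face[1].pos + 1, face[1].uv + 1,
                     face[2].pos + 1, face[2].uv + 1);
    std::fclose(f);
}
Since every file is written from the same shared vertex list, vertex i would mean the same thing in file_0.ext, file_1.ext and file_2.ext.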

The Gay Bean fucked around with this message at 02:06 on Dec 14, 2016

elite_garbage_man
Apr 3, 2010
I THINK THAT "PRIMA DONNA" IS "PRE-MADONNA". I MAY BE ILLITERATE.
OBJ is the simplest, but it doesn't contain animation info, so if that's something you may want to add later, you'll have to switch formats or come up with one of your own. FBX is pretty similar to the OBJ format but can hold animation info.

There are a few parsers already written out there that you can use to build your arrays in main memory, and either pass them directly to your scene, or sort them based on how you want to represent/draw the triangles.

elite_garbage_man fucked around with this message at 05:45 on Dec 14, 2016

Raenir Salazar
Nov 5, 2010

College Slice
I managed to finish my final project! Here's a Demo video + Commentary.

Basically I made an OpenGL application that animates a rigged mesh using the Windows Kinect v2.

There are two outstanding issues:

1. Right now every frame is a keyframe when inserting; I don't really have it so that you can have a 30-second animation with, say, 3 keyframes that it interpolates between. I'm seeing if I can fix it, but I'm getting some strange bad-allocation memory errors when I try, on super simple lines of code too, like aiVector3D* positionKeys = new aiVector3D[NEW_SIZE];. I don't get it; I'm investigating.

2. It only works on any mesh in theory: they have to share the same skeleton structure and names, and their bones have to have a particular orientation that matches the Kinect. But when I try to fix the orientations so they match, it ruins my rigging on the pre-supplied blend files I found on YouTube from Sebastian Lague. I'd have to re-skin the meshes to the fixed orientations, which is a huge headache, as I'm not 100% sure how the orientations have to be set up in Blender to make the Kinect happy.

quote:

- Bone direction(Y green) - always matches the skeleton.
- Normal(Z blue) - joint roll, perpendicular to the bone
- Binormal(X orange) - perpendicular to the bone and normal

Okay, so Y makes sense to me: it follows the length of the bone from joint to joint. I'm not sure if it's positive Y or negative Y, but I hope it doesn't matter. In Blender the default orientation, following most tutorials, is a positive Y orientation facing away from the parent bone.

Now "Normal" and "Binormal" don't make sense to me in any practical way. If the Bone is following my mesh's arm, is Z palm up or palm down? This is all I really care about and I don't see anything in my googling that implies what's correct. Using Blender's "Recalculate Bone Roll" with "Global Negative Y Axis" points the left arm Z's axis forward, and sometimes this gives good results?

I want my palm movement to match my palm orientation, but it's hard to get right because my mesh gets deformed when I edit the bones without re-rigging, and it's hard to know up front if I'm right. :(

Absurd Alhazred
Mar 27, 2010

by Athanatos

Raenir Salazar posted:

I managed to finish my final project! Here's a Demo video + Commentary.

Basically I made an OpenGL application that animates a rigged mesh using the Windows Kinect v2.

There are two outstanding issues:

1. Right now every frame is a keyframe when inserting; I don't really have it so that you can have a 30-second animation with, say, 3 keyframes that it interpolates between. I'm seeing if I can fix it, but I'm getting some strange bad-allocation memory errors when I try, on super simple lines of code too, like aiVector3D* positionKeys = new aiVector3D[NEW_SIZE];. I don't get it; I'm investigating.

2. It only works on any mesh in theory: they have to share the same skeleton structure and names, and their bones have to have a particular orientation that matches the Kinect. But when I try to fix the orientations so they match, it ruins my rigging on the pre-supplied blend files I found on YouTube from Sebastian Lague. I'd have to re-skin the meshes to the fixed orientations, which is a huge headache, as I'm not 100% sure how the orientations have to be set up in Blender to make the Kinect happy.


Okay, so Y makes sense to me: it follows the length of the bone from joint to joint. I'm not sure if it's positive Y or negative Y, but I hope it doesn't matter. In Blender the default orientation, following most tutorials, is a positive Y orientation facing away from the parent bone.

Now "Normal" and "Binormal" don't make sense to me in any practical way. If the Bone is following my mesh's arm, is Z palm up or palm down? This is all I really care about and I don't see anything in my googling that implies what's correct. Using Blender's "Recalculate Bone Roll" with "Global Negative Y Axis" points the left arm Z's axis forward, and sometimes this gives good results?

I want my palm movement to match my palm orientation, but it's hard to get right because my mesh gets deformed when I edit the bones without re-rigging, and it's hard to know up front if I'm right. :(

I don't know much about the Kinect, but I would have thought that "bone roll" (which I would have called "bone pitch") would be the vector the bone rotates around, i.e. perpendicular to the plane of movement of the bone, and that the binormal is what you would get from (bone direction) x normal. This gets a bit confusing with the thumb, which insists on being opposable in humans and is also missing a joint.

As for exporting from Blender, what I've found is that it's basically impossible to resolve in closed form, so just try out a few options until you find the one that works, and then save that export configuration for future use.

Raenir Salazar
Nov 5, 2010

College Slice

Absurd Alhazred posted:

I don't know much about the Kinect, but I would have thought that "bone roll" (which I would have called "bone pitch") would be the vector the bone rotates around, i.e. perpendicular to the plane of movement of the bone,



Would this be that? Y follows the bone, but then X and Z feel like they could be anything. I'm confused: can't the bone rotate on either the X or Z axis?

Absurd Alhazred
Mar 27, 2010

by Athanatos

Raenir Salazar posted:



Would this be that? Y follows the bone, but then X and Z feel like they could be anything. I'm confused: can't the bone rotate on either the X or Z axis?

Other than the thumb's metacarpal*, which is all over the place, most bone joints have a natural movement plane, which is what I would think the Z would be perpendicular to. I don't know how they deal with the thumb: do they have any special allowances for it?

* Leap Motion calls this a phalange for internal consistency, so they can treat the thumb as a finger with a zero-length metacarpal. I don't know how the Kinect does it. See this diagram for the real-life naming convention, and this page for the convention Leap Motion uses.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Raenir Salazar posted:



Would this be that? Y follows the bone, but then X and Z feel like they could be anything. I'm confused: can't the bone rotate on either the X or Z axis?
The fact that you can appear to bend your arm around either axis is really a facet of the Y-axis rotation of the bone above it in the hierarchy; when you rotate *that* bone 90 degrees, you effectively swap the directions of the X and Z axes of the lower bone (sign notwithstanding). The body does a pretty good job of hiding this, but with your elbow out, try touching your fist to your chest, watch the joint, and then try to point your lower arm upwards without the joint rotating. It doesn't work.

Raenir Salazar
Nov 5, 2010

College Slice

Absurd Alhazred posted:

Other than the thumb's metacarpal*, which is all over the place, most bone joints have a natural movement plane, which is what I would think the Z would be perpendicular to. I don't know how they deal with the thumb: do they have any special allowances for it?

* Leap Motion calls this a phalange for internal consistency, so they can treat the thumb as a finger with a zero-length metacarpal. I don't know how the Kinect does it. See this diagram for the real-life naming convention, and this page for the convention Leap Motion uses.

IIRC the Kinect treats every bone the same way; it only represents one finger and the thumb, though. MSDN

Though neither of my meshes has fingers, IIRC.

In Blender, unless I have inverse kinematics, I can rotate the bones however I want when animating; so if Z is the bone normal/roll and X is the binormal, how should the bones be oriented in Blender with respect to their "natural" plane of movement? Which brings me to:


roomforthetuna posted:

The fact that you can appear to bend your arm around either axis is really a facet of the Y-axis rotation of the bone above it in the hierarchy; when you rotate *that* bone 90 degrees, you effectively swap the directions of the X and Z axes of the lower bone (sign notwithstanding). The body does a pretty good job of hiding this, but with your elbow out, try touching your fist to your chest, watch the joint, and then try to point your lower arm upwards without the joint rotating. It doesn't work.

I can see this, but what about the shoulder? It can rotate forwards (holding my arms in front of me) and sideways, so that my arms point away from my body. Or is my collarbone doing the rotating that causes that axis change?

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
Anyone know the reasoning behind the OpenGL standard not requiring RGB-format textures other than 11_11_10 and 10_A2 to be renderable? I've used RGB16/32F as render targets a couple of times in projects, and I'm really surprised to learn that vendors are not actually required to allow this. It just seems really arbitrary.

Joda fucked around with this message at 08:25 on Jan 7, 2017

pseudorandom name
May 6, 2007

As a guess: because nobody supports non-power-of-two pixels in hardware.

Kobata
Oct 11, 2012

Joda posted:

Anyone know the reasoning behind the OpenGL standard not requiring RGB-format textures other than 11_11_10 and 10_A2 to be renderable? I've used RGB16/32F as render targets a couple of times in projects, and I'm really surprised to learn that vendors are not actually required to allow this. It just seems really arbitrary.

It's likely because you're looking for RGB, not RGBA, because:

pseudorandom name posted:

As a guess: because nobody supports non-power-of-two pixels in hardware.

If you look at GL info (or even Vulkan info) dumps for various GPUs, you'll notice they really like 1-, 2-, or 4-component formats; 3-component support tends to be limited to a few formats that were historically common (like 8:8:8) or ones with varying bit counts that add up right (5:6:5, 11:11:10).

pseudorandom name
May 6, 2007

Don't forget that the GPU driver may just be lying to you entirely about what features the hardware supports or what the actual format is that you're using.

Here's the complete list of render target formats supported by Radeon Sea Island GPUs, for example:

FORMAT
Specifies the size of the color components and in some cases the number
format. See the COMP_SWAP field below for mappings of RGBA
(XYZW) shader pipe results to color component positions in the pixel
format. When reading from the surface, missing components in the format
will be substituted with the default value: 0.0 for RGB or 1.0 for alpha.
POSSIBLE VALUES:
00 - COLOR_INVALID: this resource is disabled
01 - COLOR_8: norm, int, srgb
02 - COLOR_16: norm, int, float
03 - COLOR_8_8: norm, int, srgb
04 - COLOR_32: int, float
05 - COLOR_16_16: norm, int, float
06 - COLOR_10_11_11: float only
07 - COLOR_11_11_10: float only
08 - COLOR_10_10_10_2: norm, int
09 - COLOR_2_10_10_10: norm, int
10 - COLOR_8_8_8_8: norm, int, srgb
11 - COLOR_32_32: int, float
12 - COLOR_16_16_16_16: norm, int, float
14 - COLOR_32_32_32_32: int, float
16 - COLOR_5_6_5: norm only
17 - COLOR_1_5_5_5: norm only, 1-bit component is always unorm
18 - COLOR_5_5_5_1: norm only, 1-bit component is always unorm
19 - COLOR_4_4_4_4: norm only
20 - COLOR_8_24: unorm depth, uint stencil
21 - COLOR_24_8: unorm depth, uint stencil
22 - COLOR_X24_8_32_FLOAT: float depth, uint stencil

NUMBER_TYPE
Specifies the numeric type of the color components.
POSSIBLE VALUES:
00 - NUMBER_UNORM: unsigned repeating fraction (urf): range [0..1], scale factor (2^n)-1
01 - NUMBER_SNORM: Microsoft-style signed rf: range [-1..1], scale factor (2^(n-1))-1
04 - NUMBER_UINT: zero-extended bit field, int in shader: not blendable or filterable
05 - NUMBER_SINT: sign-extended bit field, int in shader: not blendable or filterable
06 - NUMBER_SRGB: gamma corrected, range [0..1] (only supported for COLOR_8, COLOR_8_8 or COLOR_8_8_8_8 formats; always rounds color channels)
07 - NUMBER_FLOAT: floating point: 32-bit: IEEE float, SE8M23, bias 127, range (-2^129..2^129); 16-bit: Short float SE5M10, bias 15, range (-2^17..2^17); 11-bit: Packed float, E5M6 bias 15, range [0..2^17); 10-bit: Packed float, E5M5 bias 15, range [0..2^17)

Tres Burritos
Sep 3, 2009

I have a really stupid math problem.

I have an instanced 3D arrow that I need to rotate according to a vector (x,y,z) I pull out of a data texture in a GLSL shader.

code:
//the position of the vertices of the instanced arrow
attribute vec3 vertexPosition;

//for each instance, the UV coordinates to use when looking up the arrow's
//direction vector and position vector
attribute vec2 texUV;

//data texture holding the starting position of each arrow
uniform sampler2D arrowPositions;

//data texture holding the direction vector of each arrow
uniform sampler2D arrowDirections;

void main() {

	//get the "root / base / origin" of each instanced arrow
	vec3 arrowBase = texture2D(arrowPositions, texUV).xyz;

	//get the direction that the arrow should point
	vec3 arrowDirectionVector = texture2D(arrowDirections, texUV).xyz;

	//compute a rotation matrix
	mat3 rotationMatrix = someHowComputeARotationMatrix(arrowBase, arrowDirectionVector);

	//rotate the instance vertices
	vec3 rotatedPosition = rotationMatrix * vertexPosition;

	//the final position
	gl_Position = projectionMatrix * modelViewMatrix * vec4(arrowBase + rotatedPosition, 1.0);
}
What does the someHowComputeARotationMatrix(arrowBase, arrowDirectionVector) function look like? I've tried various approaches, but they don't work properly for some cases. I'm missing something here, but I'm not sure what it is.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
To get the axis to rotate around, you can cross the direction your arrow points in model space with the direction you loaded from the texture, then normalize. That is to say, if your arrow in model space points in modelDir and you want it to point in someDir, you do normalize(cross(modelDir, someDir)) (you may have to swap the two vectors to get the right result, though). Then, to get the angle to rotate by, you do acos(dot(modelDir, someDir)).

E: Note, you may have to guard against the two directions being the same before normalizing, or you can get some weird results from the normalization. A simple if(modelDir == someDir) {rotationMat = mat4();} should do.
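
In C++ with GLM the same construction might look like the sketch below (modelDir is assumed to be the arrow's model-space pointing direction; note the antiparallel case needs a guard too, since the cross product also vanishes there):
code:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a matrix that rotates modelDir onto someDir (both non-zero).
glm::mat3 arrowRotation(glm::vec3 modelDir, glm::vec3 someDir) {
    modelDir = glm::normalize(modelDir);
    someDir  = glm::normalize(someDir);
    float c = glm::clamp(glm::dot(modelDir, someDir), -1.0f, 1.0f);
    if (c > 1.0f - 1e-6f)
        return glm::mat3(1.0f);  // already aligned: identity
    glm::vec3 axis;
    if (c < -1.0f + 1e-6f) {
        // Opposite directions: any axis perpendicular to modelDir works.
        glm::vec3 helper = std::abs(modelDir.x) < 0.9f ? glm::vec3(1, 0, 0)
                                                       : glm::vec3(0, 1, 0);
        axis = glm::normalize(glm::cross(modelDir, helper));
    } else {
        axis = glm::normalize(glm::cross(modelDir, someDir));
    }
    return glm::mat3(glm::rotate(glm::mat4(1.0f), std::acos(c), axis));
}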

Joda fucked around with this message at 20:57 on Feb 9, 2017

Tres Burritos
Sep 3, 2009

Holy christ, thank you. I should've been committing more just so I could see where my stupid mistake was, because this looks a whole hell of a lot like what I had before. (I may have missed the outer "normalize" on the axis calculation ... I think ...)

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
What's the best way to stream texture data to the GPU from a separate thread while also performing draw calls? I have all my texture data in array textures to minimize state changes, and I won't be reading from a region that is being written to while the texture is streaming. Apparently there's a thing called a pixel buffer object that I can map to a pointer with glMapBuffer, but I can find very little documentation about it (at least in the way of direct explanations of which GL calls to use to stream the data from a pixel buffer to a region of a texture).

What about DXT compression for something like this?
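
Roughly, the PBO path I've pieced together looks like the sketch below (handles and sizes are placeholders). The GL calls themselves still have to happen on the thread that owns the context; the part you can hand to a loader thread is filling the mapped pointer. For DXT data the analogous upload call is glCompressedTexSubImage3D.
code:
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, layerBytes, nullptr, GL_STREAM_DRAW);

// Fill the mapped pointer (this memcpy is the work another thread can do).
void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, pixelData, layerBytes);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// With a PBO bound, the data argument is an offset into the buffer, and
// the call can return without waiting for the transfer to complete.
glBindTexture(GL_TEXTURE_2D_ARRAY, textureArray);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                0, 0, layer,       // x offset, y offset, target layer
                width, height, 1,  // one layer deep
                GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);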

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
How fast is glCopyImageSubData()? Does it work with DXT-compressed textures? I was thinking I could stream the texture into a "normal" 2D texture, then copy it over to the array texture when I'm done.
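
The call itself seems simple enough; a sketch of copying a staging texture into one array layer (names are placeholders, and the two textures must have compatible internal formats):
code:
// Copy mip level 0 of a 2D staging texture into layer "layer" of a
// 2D array texture (for DXT, dimensions should respect the 4x4 blocks).
glCopyImageSubData(stagingTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   arrayTex, GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                   width, height, 1);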

Jewel
May 2, 2009

Joda posted:

How fast is glCopyImageSubData()? Does it work with DXT-compressed textures? I was thinking I could stream the texture into a "normal" 2D texture, then copy it over to the array texture when I'm done.

Not sure why you're simultaneously trying to avoid state changes (usually done to avoid sending a lot of data to/from the GPU and to keep things fast) while also trying to modify pixels on the CPU (which is the opposite: very slow, and it doesn't use the GPU's power at all). There might be an XY problem here. What are you attempting to do, and why can't you do it on the fast GPU instead of on the slow CPU?

You should be streaming all the texture data you need up front to the GPU and just binding different textures back and forth between draw calls, perhaps rendering to texture with render targets/framebuffers if you need feedback loops of some kind (like, say, a post-process).

The closest thing you get to draw calls on different threads is building sub-command buffers asynchronously, combining them at some synchronization point in the future, and submitting them to draw.

Interested to know what your exact goal is!
