Joda
I'm taking an introductory graphics course this semester that uses OpenGL (it uses the Angel book, which I assume some people here are familiar with). One assignment this week is to reproduce the effect of applying a translation matrix and a rotation around the Y-axis by -120 degrees (i.e. view = RotateY(-120)*Translate(-4,-1,-1)) using a LookAt() function that takes an eye position, a point to look at and a normalized up-vector. Eye position and up-vector were intuitively (4,1,1) and (0,1,0) respectively. For the look-at point, however, we assumed we had to rotate it by 120 degrees in the X-Z plane, so we had (4 + cos(2*M_PI/3), 1, 1 + sin(2*M_PI/3)). When this wasn't right, we made an implementation where we could gradually increment the angle by M_PI/6 (i.e. 30 degrees). As it turns out, we had to rotate the camera by 5*M_PI/6, or 150 degrees, not 120.

Could anyone shed some light on why this is the case? I can't really make it make any sense.
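For anyone reading along later, here is a sketch of where the 150 degrees can come from, assuming the standard RotateY convention (rotating +Z toward +X) and that the camera's world transform is the inverse of the view matrix:

code:
V = R_y(-120^\circ)\,T(-4,-1,-1) \;\Rightarrow\; V^{-1} = T(4,1,1)\,R_y(120^\circ)

\mathbf{f} = R_y(120^\circ)\begin{pmatrix}0\\0\\-1\end{pmatrix} = \begin{pmatrix}-\sin 120^\circ\\0\\-\cos 120^\circ\end{pmatrix} = \begin{pmatrix}\cos 150^\circ\\0\\\sin 150^\circ\end{pmatrix}

\text{look-at point} = \mathbf{eye} + \mathbf{f} = (4+\cos 150^\circ,\;1,\;1+\sin 150^\circ)
The default forward direction is -Z rather than +X, and the camera's rotation is the inverse of the view's rotation, which is why the angle lands at 150 degrees instead of 120.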

Joda fucked around with this message at 15:41 on Sep 18, 2014

Joda
I just implemented a Phong shader, but I'm not really sure why it works. I calculate the vertex position, the vertex-to-light vector and the vertex normal in the vertex shader (the way I implemented it, everything is in eye space) and send them as out fields to the fragment shader. The thing that has me baffled is why it interpolates, say, the light vector properly across all fragments, while also interpolating the surface normal so it is uniform for all fragments across the same surface. As far as I can tell, it has no real way of telling when it's interpolating normals and when it's interpolating a position or direction vector that is not uniform across the surface, since all out fields are just plain vec4s. I even tested it by telling the fragment shader to draw the normals and the light vector respectively as the colour, which confirmed that it is interpolating like it should to produce the right result. I just don't understand how it knows the difference.

Can anyone here explain what's happening or can you point me to a resource on the subject?


Oh well, gently caress me. I just realised that the GPU draws triangles, not entire geometric shapes all at once, so it gets three normals that extrude from the corner points of each triangle. Sorry about that.
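For posterity, a minimal sketch of the setup described above, with hypothetical uniform/attribute names. Every out is interpolated the same way, barycentrically across each triangle, so a normal only comes out "uniform" when all three corners of the triangle carry the same normal:

code:
#version 330

uniform mat4 modelView;
uniform mat4 projection;
uniform mat3 normalMatrix;

in vec3 position;
in vec3 normal;

out vec3 eyePos;    // interpolated per-fragment, like any other out
out vec3 eyeNormal; // same mechanism; only "constant" if all 3 vertices agree

void main() {
    vec4 p = modelView * vec4(position, 1.0);
    eyePos = p.xyz;
    eyeNormal = normalMatrix * normal;
    gl_Position = projection * p;
}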

Joda fucked around with this message at 05:56 on Oct 6, 2014

Joda

Colonel J posted:

I'm just starting out my CG career but I gotta say: man the inconsistency in coordinate systems across platforms is annoying. Right now I'm working with a few different programs that have varying conventions as to whether Z or Y is up, and which way "forward" is. Is there a list of equations to go from and to all the possible orientations? I know I'll have to work them out by hand but :effort:

I'm not really sure what you're asking: are you working in different modelling applications or different graphics APIs? The distinction matters. If it's APIs, I'm pretty sure OpenGL consistently has (0,1,0) as its default camera up-vector and (0,0,-1) as its camera direction vector, regardless of OS or architecture. I'm not sure about Direct3D, but if you're working in both, you'd probably have to write two vastly different frameworks anyway. If you're trying to draw a model you've imported with your own code, it should be a simple matter of defining a rotation matrix (or series of matrices) that orients it correctly and applying it to every model you get from that particular modelling application.

If you're talking about different modelling applications, can't you just rotate whatever you're importing until it has the orientation you want? I haven't worked much in modelling, but I don't imagine that'd be very hard.
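As a concrete example of that fix-up matrix idea, a minimal sketch with GLM for the common Z-up-to-Y-up case (the -90 degrees about X is an assumption about the source application's convention; double-check its handedness):

C++ code:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate -90 degrees about X so the asset's +Z (its up axis) lands on OpenGL's +Y.
glm::mat4 zUpToYUp = glm::rotate(glm::mat4(1.0f),
                                 glm::radians(-90.0f),
                                 glm::vec3(1.0f, 0.0f, 0.0f));

// Apply once, in front of the model's own transform:
// model = zUpToYUp * model;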

Joda
I'm about to finish introductory graphics and rendering (two separate courses) and we have to do a final project. My mate and I decided to make a joint project for both classes on diffuse reflectance in real time, using the many-point-lights method with imperfect shadow maps. So far I'm fairly clear on what we have to do (with the exception of some details, but I've already found reading material for most of it). One thing I'm not quite sure about is how you do hemispherical shadow/depth maps. Projecting a point onto a sphere is intuitive enough, but I can't find anywhere that explains how to map an entire triangle to a sphere, so that its edges are mapped as well before they are rasterized for the final depth map. Are you just supposed to accept the approximation you get from mapping the vertices to the sphere and not accounting for edge warping?

E: Come to think of it, I guess we're already making a gross approximation with the ISMs, so approximating the hemisphere seems like the lesser of two evils? At any rate, I still feel like I could use a bunch more reading, so if anyone has links to good resources on the subject I'd greatly appreciate them.

Joda fucked around with this message at 01:48 on Nov 27, 2014

Joda
For deferred shading (OpenGL), how do you get anything other than floats between 0 and 1 into a texture? I'm currently accounting for this discrepancy in my shaders, but I had to go with a solution that seems very shady, where I exploit the fact that all vertices in my scene are between -1 and 1 on all axes. Also, are there any easy ways to avoid artifacts from position-map imprecision when you generate your light map? I know multisampling the light map is an option, but performance is already an issue with what I'm doing.

This is how the light map looks. The artifacts are most obvious on the small box in the front and the wall to the right. (The scene is Cornell boxes)


E: I fixed the imprecision problem by encoding my scene information as RGBA16F. I still can't figure out how to get values over 1 or under 0 into the texture, though.

Joda fucked around with this message at 16:04 on Dec 5, 2014

Joda

Thanks. Looks like I'd unknowingly fixed that problem as well when I increased the position map precision to 16-bit floats :doh:. I figured I had to set a state somewhere to stop GL from clamping, so I never thought to check the format itself.
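For reference, the fix in code form, as a sketch (width/height assumed to be in scope): normalized fixed-point formats like GL_RGBA8 clamp fragment outputs to [0,1] on write, while float internal formats store whatever you write.

C++ code:
// A g-buffer attachment that can hold values outside [0,1].
GLuint positions;
glGenTextures(1, &positions);
glBindTexture(GL_TEXTURE_2D, positions);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// GL_RGBA16F is floating-point: no normalization, no clamping on write.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, positions, 0);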

Joda fucked around with this message at 03:27 on Dec 6, 2014

Joda
How would you go about changing the resolution of the framebuffer you're drawing to? I'm working on a project with a 500x500 viewport, but I need to encode a 10x10 texture with data that I compute on the GPU. I'm still a huge newbie when it comes to OpenGL, so I can't figure out how to make that work. As far as I can tell, if I make a 10x10 texture and a 10x10 renderbuffer and draw the elements I want to the texture, it just takes the top-left-most 10x10 pixels of the 500x500 framebuffer, instead of drawing the entire scene into 10x10 texels as I would expect. That is to say, I bind a 10x10 texture to the framebuffer, draw my elements and get the corner of the scene instead of the whole scene.

Joda fucked around with this message at 14:48 on Dec 20, 2014

Joda

NorthByNorthwest posted:

In my project, I used:
code:
glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);
glViewport(0, 0, 1024, 1024);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//...draw to 1024x1024 texture...

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//...draw scene...
Changing the viewport worked nicely for me.

That worked perfectly. Thanks a bunch :)

Joda
Does anyone know why the Sublime Text GLSL validation plugin (which uses the ANGLE preprocessor) would say that version 330 is not supported? Is it an issue with my version of OpenGL, or can ANGLE just not be used for non-ES/WebGL code? I swear I try to Google these things, but they don't seem to be very Google-friendly questions.

Joda
Isn't compiling anything low-level (e.g. C/C++) in a VM a bad idea for cross-platform compatibility? Whatever version of gcc or MSVC you're using in the VM will compile for the system it thinks it's on (i.e. whatever system the VM is emulating, including any hardware-level emulation), as opposed to having a native version of Windows or Linux running on dual boot.

Joda
You could draw world/model positions to an offscreen buffer and use glReadPixels() on that buffer with mouse x and the window height minus mouse y. As long as you remember not to use an internal format that clamps its values, it should work. It's not particularly fast, but if it's 2D that shouldn't matter.
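A rough sketch of that (pickingFBO and the mouse/window variables are hypothetical names; assumes the buffer was created with a float format like GL_RGBA32F so positions aren't clamped):

C++ code:
// Read the world/model position under the cursor from the offscreen buffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, pickingFBO);
glReadBuffer(GL_COLOR_ATTACHMENT0);

float picked[4];
// GL's window origin is bottom-left; mouse coordinates are usually top-left, hence the flip.
glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RGBA, GL_FLOAT, picked);

glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);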

Joda
I think it may be because you're binding the UVs to location 2 instead of 1. I'm not sure why you're doing that, since you have no normals, so you could have positions in 0 and UVs in 1. When the shader compiler reads this:

code:
in vec3 vertex_position;
in vec2 vertex_uv;
I'm pretty sure it expects vertex_position to be at location 0 and vertex_uv at location 1. That said, I'm not sure how it handles it when you don't actually have any data uploaded for location 1, so I may be completely wrong.

If your GPU supports shader version 330, try replacing the top of your vertex shader with this:

code:
#version 330

layout(location=0)in vec3 vertex_position;
layout(location=2)in vec2 vertex_uv;
If you do in fact upload normal data, try just adding the line

code:
in vec3 vertex_normal;
between position and uv.

If the other things fail, try putting the UV coordinates at location 1 instead of location 2.

Also, the standard nowadays is to interleave your data into a single array, so you have {position,uv,position,uv,...,position,uv} rather than sending it as separate arrays. This is faster for the GPU when you draw, and it allows you to do stuff like this:

code:
GLuint vbuffer;
glGenBuffers(1, &vbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vbuffer);//Bind vertex buffer

glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), &(vertices[0]), GL_STATIC_DRAW);//Upload interleaved data

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
//Position is 3 floats, occurring every sizeof(Vertex), offset to the position data in Vertex

glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, UV));
//UV is 2 floats, occurring every sizeof(Vertex), offset to the UV data in Vertex
E: Note that that particular implementation (since it uses offsetof()) requires a Vertex struct containing a float position[3] and a float UV[2], which is one way to interleave the data.
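For completeness, a sketch of the kind of struct that implies (just one possible layout):

C++ code:
#include <cstddef> // offsetof

struct Vertex {
    float position[3]; // attribute 0: 3 floats
    float UV[2];       // attribute 1: 2 floats
};
// sizeof(Vertex) is the stride, and offsetof(Vertex, position) /
// offsetof(Vertex, UV) are the offsets passed to glVertexAttribPointer above.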

Joda fucked around with this message at 17:19 on May 2, 2015

Joda
E: Happy you got it fixed. Nothing to see here.

Joda fucked around with this message at 01:37 on Jun 12, 2015

Joda
Does anyone here have any experience with compiling and installing the G3D Innovation Engine on Linux?

I tried just using the Python script that comes with the latest version, but it basically just stops after unzipping ffmpeg and gives me an sh error about a missing parenthesis or an expected bracket. I think it's generating faulty makefiles or something, but I don't know enough about either to fix it. My usual MO with poo poo like this is to hammer it into submission with cmake, but there's a shittonne of dependencies for everything, which is a lot of work to sort out, and I suck at linking stuff in the right order (basically I shoot randomly until something sticks).

Joda
For my B.Sc. project I need to do multiple samplings of 4 separate buffers per fragment. To achieve somewhat decent frame times, I want to avoid sampling too many separate textures, and cache misses, if possible. Say I want to pack 128 bits of arbitrary information into a GL_RGBA32F format: are there any guides on how to "cheat" GLSL in a way that will allow me to pack and unpack the information? An example of what I want to do:

Packing side (fragment shader output):
code:
[Calculate stuff]
vec4 tempOut = vec4();
tempOut += ((quarter-precision vec4 cast) normal1);
tempOut += ((quarter-precision vec4 cast) normal2) << 32;
tempOut += ((quarter precision vec4 cast) position) << 64;
tempOut += ((quarter precision vec4 cast) position2) << 96;

fragOut = tempOut;
Unpacking side (sampling the texel):
code:
in vec4 input;

void main() {
	vec4 normal1 = (quarter-precision vec4 cast) ((input << 96) >> 96);
	vec4 normal2 = (quarter-precision vec4 cast) ((input << 64) >> 96);
	etc.
}
Is it even possible to do this sort of bit-wise cast in GLSL, where it takes the bits in a sequence and re-interprets them as something else? (E.g. taking a 32-bit unsigned int and re-interpreting its four bytes as 8-bit components, so 2147483648, which is binary 1000...0, would come out as (128,0,0,0).)

Joda
Thanks for all that! Definitely a lot to consider. Specifically, what I'm doing is global illumination with a 2-layer g-buffer, and to my understanding I'm going to be doing at least 9 samples from each buffer for each method (which would become 36 samples for radiosity and 36 for AO per fragment if I used 4 separate buffers, as opposed to 9+9). If I have the time I'll probably implement traditional separate-textures deferred rendering for comparison, since it'll make a nice addition to the report.

As for normal packing, the paper I'm following already recommends giving the radiosity algorithm two 16-bit normals in a single 32-bit word.
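For what it's worth, that exact packing is built into newer GLSL; a sketch (packHalf2x16/unpackHalf2x16 need GLSL 4.20 or the ARB_shading_language_packing extension; normal1 and someFloat are stand-in names):

code:
// Writing side: two 16-bit halves into one 32-bit word.
uint nxy = packHalf2x16(normal1.xy);

// Reading side.
vec2 xy = unpackHalf2x16(nxy);

// Plain bit reinterpretation also exists (GLSL 3.30+):
uint bits  = floatBitsToUint(someFloat);
float back = uintBitsToFloat(bits);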

E: Is there anywhere I can read up on GLSL's internal formats? My understanding is that a vec4 is 4 32-bit floats, so I need to know how to convert a 32-bit float into an equivalent 8-bit representation before packing it.

Joda fucked around with this message at 17:21 on Jul 7, 2015

Joda
If I have a 2D array texture with two layers in OpenGL, am I wrong in assuming that I would access the contents of the first layer like so?:

code:
#version 330

in vec2 UV;

uniform sampler2DArray texture_uniform;
uniform int layer;

out vec4 fragColor;

void main() {
    fragColor = texture(texture_uniform, vec3(UV,0));
}
I haven't drawn to the texture yet, but I cleared the framebuffer (which has three different two-layer array textures attached to it) with glClearColor(1,0,0,1), so I would expect a red screen. I already verified that the UV coordinates and the full-screen quad work as they should.

Joda

Sex Bumbo posted:

Did you bind it correctly?

I bound it like I would any other texture.

C++ code:
void RenderEngine::renderTextureLevel(GLuint texture, unsigned short int layer) {
    tex_shader.use();

    glBindVertexArray(quad.getVAO());

    glm::mat4 pvm = glm::ortho(-width/2.0f,width/2.0f,-height/2.0f,height/2.0f,0.1f,3.0f)*
                    glm::lookAt(glm::fvec3(0,0,-1),glm::fvec3(0,0,1),glm::fvec3(0,1,0))*
                    glm::scale(glm::vec3(width,height,1));

    glUniformMatrix4fv(tex_shader.get_U_Location(Shader::U_M_PVM),1,GL_FALSE,glm::value_ptr(pvm));

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
    glUniform1i(tex_shader.get_U_Location(Shader::U_I_TEXTURE),0);

    glUniform1i(tex_shader.get_U_Location(Shader::U_I_LAYER),layer);

    glDrawElements(GL_TRIANGLES,quad.getNoIndices(),GL_UNSIGNED_INT,(void*) 0);
}
Also, in case my problem is with how I'm attaching/writing to the textures, here is an example of how I create a texture:

C++ code:
glGenFramebuffers(1,&generationFBO);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER,generationFBO);

glGenTextures(1, &normals);
glBindTexture(GL_TEXTURE_2D_ARRAY,normals);

glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage3D(GL_TEXTURE_2D_ARRAY,0,GL_RG16F,width,height,2,0,GL_RG,GL_FLOAT,0);
And where I attach them to the draw framebuffer:

C++ code:
shader.use();
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,generationFBO);

glFramebufferTexture(GL_DRAW_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,depth,0);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,normals,0);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT1,diffColors,0);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT2,specColors,0);

GLuint buffers[3] = {GL_COLOR_ATTACHMENT0,GL_COLOR_ATTACHMENT1,GL_COLOR_ATTACHMENT2};

glDrawBuffers(3,buffers);

glClearColor(1,0,0,1);

glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
E: I should note that the texture I'm currently trying to output is the diffuse color which is RGBA8. I'm not sure what would happen if I try to output the normals, but I assume the RGBA8 internal format should work.

Joda fucked around with this message at 23:38 on Jul 26, 2015

Joda

Joda posted:

I bound it like I would any other texture.
snip

I figured it out. I'd somehow missed that the boilerplate code I took my parameters from had a mipmap minification filter. I changed all the parameters to the int variants and the min filter to GL_LINEAR, and at least it can draw the top layer now.
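In other words, a sketch of the fix: with only the base mip level uploaded, a mipmapped min filter leaves the texture incomplete, so sampling it returns black.

C++ code:
// Integer parameter variants, and a non-mipmapped min filter:
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);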

Joda
Note: I don't know how you're handling actually drawing the grid, but this is based on the assumption that you know in the shaders what the coordinates of the current tile are.

I'm not too familiar with Unity. With pure OpenGL you can upload an array of texture samplers representing the grid (just keep in mind that there's a hard limit on how many uniforms you can upload, and it depends on the hardware) and then, based on distance to the grid separators, interpolate between the closest neighbouring tile(s) and the current tile.

A faster alternative to uploading uniforms every frame (especially if the map layout is static) is to generate a single integer texture that holds a texture ID for every tile, then upload all grid-tile texture samplers in an array and use the index extracted from the ID texture. Texture generation and sampling would look something like this:

C++ code:
    glGenTextures(1, &grid);
    glBindTexture(GL_TEXTURE_2D,grid);

    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);

    //Integer internal formats need the GL_RED_INTEGER client format.
    glTexImage2D(GL_TEXTURE_2D,0,GL_R8UI,grid_width,grid_height,0,GL_RED_INTEGER,GL_UNSIGNED_INT,grid_data);
	//8-bit unsigned leaves room for 256 different kinds of tiles.
GLSL fragment shader:

code:
#version 330

uniform sampler2D grid_textures[256];
uniform usampler2D texture_id;

flat in ivec2 grid_coord;

//A bunch of other stuff goes here (e.g. local tile UVs, which you need to find distance to edge and other, obvious, reasons)

void main() {
	uint centre_id = texelFetch(texture_id, grid_coord, 0).r;

	uint neighbour_ids[8];

	int x = 0;
	for(int i = -1; i <= 1; i++) {
			for(int j = -1; j <= 1; j++) {
				if(i == 0 && j == 0)
					continue;
				neighbour_ids[x] = texelFetch(texture_id, grid_coord + ivec2(i,j), 0).r;
				x++;
			}
	}

	//Do look-ups into grid_textures[...] with these IDs, find distance to neighbours, interpolate as needed etc.
	//(Samplers can't be assigned to local variables, and indexing a sampler array by a per-fragment
	//value isn't legal in GLSL 330 anyway; a sampler2DArray indexed by the ID is the robust route.)
}
Note: you probably don't want to upload 256 texture samplers all at once. I've never actually tried it, but it just seems like a really bad idea (and hardware limits on texture units will bite long before 256).

You also need to make sure the grid_textures are uploaded in the same order every time (obviously). Also, the above assumes square tiles; refurbishing it for hexagons or whatever shouldn't be too hard.

I hope that was at least somewhat helpful.

E: I am assuming here that you are drawing the entire grid in a single draw call. Also, there's a million different ways to solve these kinds of problems; this is just how I'd probably do it.

E3: If you're asking how to do the actual interpolation between neighbours, it's just a question of finding the function that looks best. You probably want something inversely exponential based on distance from the dividing line (i.e. so it goes very quickly from 0.5 neighbour/0.5 self at the edge to 1 self/0 neighbour approaching the centre).

Joda fucked around with this message at 17:50 on Aug 15, 2015

Joda
I think that to explain my thought process properly I need to dedicate an effort post to it, and so I shall. I just need to know exactly how you're drawing the grid. Like, is every tile a mesh unto itself, is every junction a vertex, or something else entirely?

Joda
I think I misunderstood your question. What I described was 1) how I'd store the grid layout so the GPU has the data ready (sending stuff from the CPU is expensive, hence the use of an int texture for this), assuming you generate it all in a single go, and 2) how to find adjacent tiles based on that texture. Neither of which answered your question about how to smooth out edges. My bad, so just ignore my original post. Anyway, I promised an effort post, so I'll do my best.

Disclaimer: if you're unfamiliar with shaders, a fragment is, for all intents and purposes, just another way of saying pixel. I only realised this might be a problem after finishing.

Stuff I assume the shader has available: UV-coordinates, texture samplers for the current tile as well as all relevant adjacent ones.

Finding values for interpolation (If you already have your interpolation values skip this step)

With interpolation we typically want a value between 0 and 1 that determines how much each texture contributes (as you probably already know). So the first step is determining how close the current fragment (or pixel, if you prefer) is to each of the six edges (in other words, how long the vector from our current pixel is when projected onto each of the six lines spanning the edges). For this explanation I'm going to assume your texture is a regular rectangle and is mapped as such, as demonstrated here:



This will allow me to treat the UV coordinates as if they were in a flat plane, and use them to determine the projection lengths.

Still referring to the drawing above, we can easily determine the interpolation values for the ED and AB line segments. For AB it's quite simply p.V/F.V, where p is the current point in the texture (i.e. the fragment's UV location) and F is point F in the picture (since F is at the vertical centre of the hexagon). For ED it's abs(p.V - E.V)/F.V, where abs = absolute value.

For the sloped sides it's slightly more complicated. Again I will use an illustration:



Here we use the projection distance from a point to a line defined by a starting point and a normalized vector (I could put the formula for finding the distance here, but it's much easier to just look it up, and the notation here would be awful). To determine this distance we define a line in the plane that runs directly through the sloped segment of the dividing line, and compare it to the point N, which is the fragment's UV coordinates. The line running through BC would be defined by C + normalize(C - B) * t, where t is the independent variable. You can use this in the formula for the distance between a line and a point (remember to use the vector-based one, since that is how our line is defined). Finally, divide the number this produces by F.V. Repeat for the segments AF, FE and DC, and we have the four remaining interpolation values and can move on to the actual interpolation.

Obviously, you should store the values in a way so you know which texture corresponds to which edge (based on the neighbour there.)

E: The entire above step might produce some weird results if your textures are not defined like I assume; let me know if so and I'll explain how to do it with vertex positions if you want. Basically, the idea is that instead of using UV coordinates you use the vertex X and Z coordinates (in model space) and replace the method for AB and ED with the one used for the other four. This also means that the point N is replaced by the model-space coordinate of the fragment, while the UV coordinates are still needed for the next section, so you need both.

Doing the interpolation

In this step I am assuming that tiles of the same kind have textures that tile without disruption. This allows us to use the same UV-coordinate for the current tile and its neighbour tile.

We now have 6 values to do our interpolation with. Since we used the centre of the figure to determine distance, some of them are going to be over 1, but never more than three (which is just a fact of geometry, assuming a regular polygon). The ones that are over 1 we have no use for, since we only want to interpolate between the current tile and the two tiles closest to the fragment. As such, discard all values except the two smallest (our values go from 0 at the edge to 1 halfway through the hexagon).

We could just pick the smaller of the two values we have and be done with it, but that would create a disruption around the corners of the polygon where edges meet, so we're going to start by interpolating linearly between those two values to get a balance between the two to use for the gradient when we do the final step.



I marked the vector between the closest edge and the fragment with min, and the one between the farthest of the two and the fragment with max. To interpolate linearly between the two values we do the following: tempColor = (min/(min + max))*sample(BC,N) + (1 - min/(min + max)) * sample(AB,N), where sample = a sample of the texture corresponding to an edge. GLSL's mix() function does this kind of linear interpolation if you want to use it.

As the final step we have to interpolate between the value we just found and the texture of the current tile. For this we just use the smaller of the two values we have left. Like you noticed, just doing a linear interpolation is probably a bad idea, since it will turn the entire tile into a gradient. To alleviate this we can add an exponent to the interpolation value so that it has more influence at the edge, but much less at the centre. We also want the edge value to be half when min = 0 (since neighbour tiles are all interpolating too), so we figure out the final interpolation value like so: interp = 0.5*(-(min^x) + 1), where x is an arbitrary exponent. I invert min so it goes from centre to edge, because it's convenient. We now use this interpolation value to find the final colour for the fragment: color = interp * tempColor + (1 - interp) * sample(main_tex,N)

The higher you make the exponent, the more pronounced the borders are going to be.

You can also use this to make beaches and such, by just uploading a beach texture as the neighbour when water goes to land or land goes to water.

I hope that was helpful, let me know if you have any questions.

Joda fucked around with this message at 17:27 on Aug 22, 2015

Joda
No it's just N, or the point in the plane. My bad.

E: Another correction: the line running through BC is defined by C + normalize(C - B) * t, not C + normalize(C - B). If you're unfamiliar with geometry, the t should help you identify which values to use in the equation.

Joda fucked around with this message at 21:11 on Aug 17, 2015

Joda
Depends on whether or not you can override the automatic texture handling I guess. I wrote it to be used in a fragment shader, but if you can do it in a surface shader it shouldn't really matter. Like I said I'm not that familiar with Unity, so I can't answer a lot of specific questions about how it works.

Joda
If your texture is mapped like that, you can probably just use the UV coordinates as if they were points in a plane, without needing vertex positions at all. What's most important is that it maps without stretching. You should probably also use the latter of the two methods regardless, because there's a million different ways you can rotate the hexagon part of the texture. Unity combining all your meshes into one makes sense, but it does mean you can't use model-space vertex coordinates.

Like I said, I don't know how Unity handles shaders, but you should have UV available as a pipeline input. As for the ABCDEF UVs, they're actually fixed values, so you can define them as constants in the shader itself based on how your texture maps. The centre point is halfway from one edge to the parallel edge across from it, which is also a constant; it doesn't matter which pair.

E: I just realised a problem with the interpolation between the closest and next closest edges. I'll think of a better way, just keep it in mind for now.

E2: I think it's fixed by doing tempColor = (min/(min + max))*sample(BC,N) + (1 - min/(min + max)) * sample(AB,N) instead of tempColor = (min/max)*sample(BC,N) + (1 - min/max) * sample(AB,N). That way the values count identically when they are equal, but exactly at the edge AB (or equivalent) will still count fully.

Joda fucked around with this message at 07:55 on Aug 19, 2015

Joda
Let me just preface by saying I like Xerophyte's idea a lot better than my own, since it's simpler and it uses a whole lot less computing power in the fragment shader.

Raenir Salazar posted:

Right right, then to Joda I have another and possibly silly question by p.V do you mean p's V of UV coordinates or do you mean p dot V meaning the dot product of p and V? :ohdear:

It's the V-coordinate of p. If it were the dot product I would have written dot(p,V). And again, p is just N in the next picture. I could've made that clearer, though.


Raenir Salazar posted:

p of AB is 0.75/0.5=1.5
p of ED is abs(0.75-0.835)/0.5=0.17

I'm slightly regretting introducing p at all. If I understand you correctly, these are the two projection distances from the AB and ED edges to your point. p is the fixed point in the plane that you are currently investigating (aka N in the second picture). Also, F.V is not half the height of the picture, assuming it maps exactly like in my illustrations. If 0.835 is E's V-coordinate, then F's V-coordinate would be half of that.

Raenir Salazar posted:

(If I recall there are many many ways of getting the norm, but the standard norm is |x|= sqrt(x_1^2+....+x_n^2)

GLSL has a normalize() function. I haven't normalized by hand since high-school maths, but the formula is normalize(x) = x/length(x), or more explicitly normalize(x) = x/sqrt(x_0^2 + x_1^2 + ... + x_n^2), where the division means each component of x is divided by the denominator.


Raenir Salazar posted:

p of FE (since it's closer) is: E+normalize(E-F) * t

(E seems to be roughly 1/4 by eyeballing it, I can look it up later)

(If I recall there are many many ways of getting the norm, but the standard norm is |x|= sqrt(x_1^2+....+x_n^2)

It also isn't clear to be if ABCDEF are vectors here or points and if points assuming the answer from before, I assume is E.V, F.V etc?

normalize(E-F)

=(0.25,0.835)-(0,0.5)=(0.25,0.235)/sqrt(0.25^2+0.235^2)=(0.25,0.235)/0.118=(2.119,1.991)
(ABCDEF are vectors argument seems stronger now, going to assume so)

E+normalize(E-F)
E+(0.25,0.835)+(2.119,1.991)=2.3~,2.7~

Or was E+normalize(E-F) meant to be Dist(N_EF)+normalize(E-F)? And by t being the independent variable, do you mean it's the current p/N?

It's been a while since I've had linear algebra.

1: ABCDEF are all points (they're also vectors, but for all intents and purposes just consider them points). To get the vector v from point x to point y, we do v = y - x.

2: A line is defined by vectors as l = p_0 + n * t, where p_0 is the starting point, n is a normalized direction and t is the independent variable. What an independent variable means (in this case, at least) is that it can be any arbitrary real value. We don't need t itself. We use the formula E + normalize(E-F) * t only abstractly, to find the distance from the line it defines to N (the point we're currently investigating). E in this case is p_0 and normalize(E-F) is the normalized direction of the line. To get the distance, take the values from this line equation and use them in the vector formula for the smallest distance from a point to a line, which you can find here: https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line#Vector_formulation

Taking the length of the result of the last equation in that section gives you the projection distance.
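Spelled out, the formula that section lands on (with a = E, n-hat = normalize(E - F) and p = N):

code:
d(\mathbf{p}) = \left\| (\mathbf{a}-\mathbf{p}) - \big((\mathbf{a}-\mathbf{p})\cdot\hat{\mathbf{n}}\big)\,\hat{\mathbf{n}} \right\|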

Joda fucked around with this message at 05:08 on Aug 20, 2015

Joda

Raenir Salazar posted:

I like eventually working on implementing both ideas as a broader lesson in working with shaders and gaining familiarity.


My eyes are a little uncooperative tonight :D Okay so F.V being roughly E.V/2, but otherwise my understanding there is correct?


This I know, but I'm attempting to do this by hand for one pixel to see if I understand the logic. I wouldn't write a matrix multiplication function without also doing it correctly once by hand as well.

Sorry for not editing my post sooner, I did it as soon as I realized my goof in reading your post and adjusted accordingly, I made you write out a lot for no reason. :(

Is my understanding in the latest revision of my post correct?

Unity's HLSL does have Lerp, it's Lerp(Color1, Color2, 0to1 blendvalue).

Thank you very much for your patience.

At first glance it looks right. I get a bit confused by the language of norm/normal sometimes. Just remember to take the length of the projection vector to get a single value. That is to say,
||(E-p)-((E-p) dot normalize(E-F))normalize(E-F)||
means length((E-p)-((E-p) dot normalize(E-F))normalize(E-F))

(This prob won't be a problem, but the wiki article confused me for a second.)

And no problem.

Joda fucked around with this message at 05:36 on Aug 20, 2015

Joda
It's very likely I completely hosed up somewhere. One thing I noticed is that you use min/max for the closest/next-closest edge interpolation; I corrected that to min/(min + max) in a later post, I just forgot to edit it into my main post.

Joda
E: ^It's not necessarily slower, since shader cores don't handle if-else statements and branching that well. There's a distinct possibility that it does every single sample regardless of whether it actually enters that conditional. That said, I think the best solution here would be a sampler2DArray, uploading indices instead of sampler uniforms.
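As a rough sketch of that route (grid_coord and tileUV as in the earlier post; assumes all tile textures share one size and format so they can live in a single array texture):

code:
uniform sampler2DArray tileTextures; // one layer per tile type
uniform usampler2D     tileIds;      // the integer ID texture

// ... in main(), with flat in ivec2 grid_coord and vec2 tileUV:
uint id = texelFetch(tileIds, grid_coord, 0).r;
vec4 tileColor = texture(tileTextures, vec3(tileUV, float(id)));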

Raenir Salazar posted:

Yup, fixed that and now closest/next closest edge seems correct now, but the main edge and the center seems still reversed:

And some are weirder than others.
TL:

BL: (Fixed this one!)


BL: (after being fixed)


L:

R:

TR:

BR:


edit:
For some reason switching around FE or EF seems to result in the shapes generally changing; BL swaps which side it's on but stays the same, the rest are still in the correct location but swap around each other's "shapes".

Edit 2: Fixed bottom left.

Fixing bottom left seems to have fixed the general shape of all the other sides but now we get this shape if each edge is unique:



So what remains now is two general issues; it seems like the interpolations don't interpolate with the interpolations.

Shown here:

And more precisely here:


And as generally visible even before the exponential factor is added (currently commented out) the edges seem to be faded but get stronger going further in. Which is the opposite of what we want.

In fact I strongly suspect if we solved the inverse problem here then it would also fix out interpolation of our interpolation problems.

Edit 3:
When I changed the interp value from .5 to .8 I actually get an effect closer to what we want, but is still an issue in that specifically the problem now is the edges being too strong vs. the original edge.


The TL edge as an example, despite it's edge supposed to being rock, it shares some of the surrounding wood textures; but the "sides" of the rock are too strong and don't interpolate well with the wood. If this were reversed I think it would work.

Oh drat, yeah. That's my mistake. What's happening (as far as I can tell) is that the max side doesn't reach 0 before it changes to something else, which creates the hard edges halfway along.

Joda fucked around with this message at 08:40 on Aug 23, 2015

Joda

Raenir Salazar posted:

Joda do you know anything I could try to fix the issue? I've done some trial and error but no luck; is the problem with the value we're using for interpolation or would it be with how we're calculating distance?

Well, there needs to be an interpolation that ensures that before it changes from one side to the other, the first side's interpolation value reaches 0. I'm trying to set something up so I can do some testing myself, as I've now realised that writing shaders without actually implementing them is a bad idea. I'll get back to you sometime later today when I'm done.

E: Sampling three edges could also be an idea, but that does mean you're gonna get contributions from edges that might be too far away to make sense.

Joda
Just to let you know I'm still working on it. I had some toolchain issues involving Avast! antivirus, so I got delayed a bit. I did the footwork and am working on getting a reasonable interpolation set up. I'm using a geometry shader to generate the hex because I wanted to set something up fast, and I decided to use a hex-coordinate system and let the rendering pipeline handle the hex-coord interpolation between vertices. Important to note: I inverted all distances from my original post, so they now go from 0 at the centre to abs(X) = 1 at an edge. To do this inversion you just need to do abs(dist - 1) with the distances you already have.

Fragment shader:
code:
#version 420

in vec3 hex_coord;

out vec4 fragColor;

struct interp_info {
    float value;
    int index;
};

void main() {
    vec4 mainColor = vec4(1);

    vec4 sideColors[6] = vec4[6](
            vec4(1,0,0,1),    //AF
            vec4(0,1,0,1),    //AB
            vec4(0,0,1,1),    //BC
            vec4(1,0,0,1),    //CD
            vec4(0,1,0,1),    //DE
            vec4(0,0,1,1)     //EF
    );


    interp_info sig_sides[2] = {interp_info(0,0),interp_info(0,0)};

    for(int i = 0; i < 3; i++) {
        if(abs(hex_coord[i]) > sig_sides[0].value) {
            sig_sides[1] = sig_sides[0];
            sig_sides[0] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        } else if(abs(hex_coord[i]) > sig_sides[1].value) {
            sig_sides[1] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        }
    }
/*
    vec4 tempColor = mix(
        sideColors[sig_sides[0].index],
        sideColors[sig_sides[1].index],
        )
    );
*/
    vec4 tempColor = sideColors[sig_sides[0].index];

    float interp = pow(sig_sides[0].value,1);

    fragColor = mix(mainColor,tempColor,interp);
}
It produces the following image:



(I like to work with pure colours, because it makes it easier to see what's going on.)

I'll host the geometry shader offsite here if you're curious. Honestly, it's really only a way for me to be lazy.

I'll post again when I've fixed the broken rear end interpolation value, just wanted to let you know I'm trying to work it out

Joda
I wanted to find a solution that only required the two closest sides, because I thought that would give the best result, but I can't make it make sense in my head. Instead I settled on a three-way weighted average of the three closest sides. We use an exponent for this average, since it means the closest edge contributes much more than the others while still being equal to the next closest at a dividing line. Remember that I inverted the distances to go from centre to edge.

Fragment shader:

code:
#version 420

in vec3 hex_coord;

out vec4 fragColor;

struct interp_info {
    float value;
    int index;
};

void main() {
    vec4 mainColor = vec4(1);

    vec4 sideColors[6] = vec4[6](
            vec4(1,0,0,1),    //AF
            vec4(0,1,0,1),    //AB
            vec4(0,0,1,1),    //BC
            vec4(1,0,0,1),    //CD
            vec4(0,1,0,1),    //DE
            vec4(0,0,1,1)     //EF
    );


    interp_info sig_sides[3] = {interp_info(0,0),interp_info(0,0),interp_info(0,0)};

    for(int i = 0; i < 3; i++) {
        if(abs(hex_coord[i]) > sig_sides[0].value) {
            sig_sides[2] = sig_sides[1];
            sig_sides[1] = sig_sides[0];
            sig_sides[0] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        } else if(abs(hex_coord[i]) > sig_sides[1].value) {
            sig_sides[2] = sig_sides[1];
            sig_sides[1] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        } else if(abs(hex_coord[i]) > sig_sides[2].value) {
            sig_sides[2] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        }
    }

    float expon = 5.0f;

    float interp1 = pow(sig_sides[0].value,expon);
    vec4 col1 = sideColors[sig_sides[0].index];

    float interp2 = pow(sig_sides[1].value,expon);
    vec4 col2 = sideColors[sig_sides[1].index];

    float interp3 = pow(sig_sides[2].value,expon);
    vec4 col3 = sideColors[sig_sides[2].index];

    vec4 tempColor = (interp1*col1 + interp2*col2 + interp3*col3)/
                (interp1 + interp2 + interp3);

    float interp = pow(sig_sides[0].value,10);

    fragColor = mix(mainColor,tempColor,interp);
}
The array sig_sides contains information on the three most significant sides (that is to say, sig_sides[0] is the most significant and so on). value contains the distance from centre to edge (between 0 and 1), and index is the corresponding index into the neighbour colour array. I marked in comments which indices of the colour array correspond to which edge. It produces the following image.

Joda fucked around with this message at 19:58 on Aug 26, 2015

Joda

Raenir Salazar posted:

Thank you very much Joda, I'll give it a crack when I get home later today.

I just have some questions so I actually understand the math/logic behind what's going on here.

code:
in vec3 hex_coord;
By Hex coordinate, you mean the hex's position as the whole on the board, but you only have one hex though right and you just have arbitrary colour values for the edges?


No, it's an interpolation value expressing the distance from the center to one of the edges, as Xerophyte suggested. For instance, hex_coord.x goes from 1 right up against AF to -1 right up against CD. If I draw hex_coord.x as the red channel, the image looks like this:



In other words, the absolute value is, depending on the value's sign, the inverted projection distance from one edge or the other to the current fragment. I define the hex_coord values in the geometry shader I linked, and let the rendering pipeline interpolate them. You already have these distances; all you need to do is invert them.


Raenir Salazar posted:

Are these the sides closest to our current pixel?

Yes.


Raenir Salazar posted:

code:
interp_info sig_sides[3] = {interp_info(0,0),interp_info(0,0),interp_info(0,0)};
Our interp_info is a data type from which we define sig_sides (as in the most significant sides correct?) with initial values of 0,0?

Yes. By initialising the values to 0, we know our own values will overwrite them.


Raenir Salazar posted:

code:
    for(int i = 0; i < 3; i++) {
        if(abs(hex_coord[i]) > sig_sides[0].value) {
            sig_sides[2] = sig_sides[1];
            sig_sides[1] = sig_sides[0];
            sig_sides[0] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        } else if(abs(hex_coord[i]) > sig_sides[1].value) {
            sig_sides[2] = sig_sides[1];
            sig_sides[1] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        } else if(abs(hex_coord[i]) > sig_sides[2].value) {
            sig_sides[2] = (hex_coord[i] < 0) ?
                                        interp_info(abs(hex_coord[i]),i) :
                                        interp_info(abs(hex_coord[i]), i  + 3);
        }
    }
Here is where things get confusing for me, and I suspect it's because I'm not sure where the hex coordinate system comes into play. I assume in the example picture the hex coordinate is 0,0,0 for every pixel? Can you step through the logic for me?

It's an ugly sorting algorithm that sorts the values in hex_coord from biggest absolute value to smallest, and based on the sign of the value it picks one end (i) or the other (i + 3). I made the colour array in such a way that opposite sides are three apart. If you already have a way of finding the three most significant edges and their corresponding textures, just keep using that.

Joda fucked around with this message at 23:48 on Aug 26, 2015

Joda
How does (||(E-p)-((E-p) dot normalize(E-F))normalize(E-F)||) / F.V look when you print it out as the colour, vs abs((length((E-p)-((E-p) dot normalize(E-F))normalize(E-F)) / F.V) - 1)?

And yes to your edit.

E: Also, you should seriously consider just uploading hex coordinates as vertex attributes. See my geometry shader for which vertices should have which values (look for X_Hex). It's easier, takes less computational power, and avoids the hard-to-read distance formula.

Joda fucked around with this message at 02:17 on Aug 27, 2015

Joda

Hm. I'm not sure what's going on here. At first glance it looks like it finds the right values. One thing, though: in (interp1*lookup[b[0]] + interp2*lookup[b[1]] + interp2*lookup[b[2]]), that last interp2 should be interp3, but I don't know how much difference that's going to make.

E: Also, sort them by smallest value before you invert them, if you're not already. That is to say, the first element in the array should be the smallest distance; then invert all three.

Joda fucked around with this message at 02:31 on Aug 27, 2015

Joda
Have you tried doing the thing I edited in about sorting first then inverting?

Joda
That looks a lot more reasonable, you can probably increase the exponents again now.

Did you sort so the smallest value is the most significant?

Joda
Great! Sorry that took like a week and more than a page, but at least we got there :D. Next time I'll probably test out my solution before posting.

Joda
I was actually planning to implement that int texture containing indices into a texture array that I originally posted about (which is why my geometry shader has those commented-out grid_x/grid_y values), since I got curious how it'd work, and it'll make a nice addition to my portfolio for when I apply for my master's. You're welcome to ask any questions that come up regarding shaders, conceptual or low-level stuff (either here or by PM), or to compare notes.

Joda
I'm implementing the index thing at the moment, but I have a quick addendum to the interpolation stuff. The values we ended up with gave some very sharp edges around the corners once I got a grid working (and I have no idea why), so I changed it to a four-way weighted average with the main colour always having weight 1.

code:
float expon = 5.0f;

float interp1 = pow(sig_sides[0].value,expon);
vec4 col1 = sideColors[sig_sides[0].index];

float interp2 = pow(sig_sides[1].value,expon);
vec4 col2 = sideColors[sig_sides[1].index];

float interp3 = pow(sig_sides[2].value,expon);
vec4 col3 = sideColors[sig_sides[2].index];

fragColor = (mainColor + interp1*col1 + interp2*col2 + interp3*col3)/
            (1 + interp1 + interp2 + interp3);
(The grid is currently just random colours.)

Before:

After:


If you want sharper edges just increase expon like before.

Joda fucked around with this message at 06:22 on Aug 28, 2015
