|
Is it legal to alias the outputs of a compute shader as a different type in another shader? Like, if I have a compute shader output to a structured buffer that it thinks contains this: code:
code:
|
# ? Apr 27, 2015 07:46 |
|
|
I'm playing around with OpenGL for the first time in a while and I'm having some trouble getting anything except the basic vertex positions to work properly. For example, I'm trying to draw a simple textured quad, but the quad only gets colored by the top-left pixel of the texture, so it seems like the UV coordinates are not uploaded properly, or something. I have a little model class that wraps the VAO and VBOs, and in the constructor I have something like this: C++ code:
C++ code:
Shaders C++ code:
I'm feeling like I'm missing something fundamental here, but googling only gives me results from people forgetting to set the UV coords altogether, or they're working in immediate mode and forgetting to call glEnable, stuff like that. netcat fucked around with this message at 16:29 on May 2, 2015 |
# ? May 2, 2015 16:26 |
I think it may be because you're binding the UVs to location 2 instead of 1. I'm not sure why you're doing this, since you have no normals, so you could have positions in 0 and UVs in 1. When the shader compiler reads this: code:
If your GPU supports shader version 330, try replacing the top of your vertex shader with this: code:
code:
If the other things fail, try putting UV coordinates at location 1 instead of location 2. Also, the standard nowadays is to interleave your data into a single array, so you have {position,uv,position,uv,position,uv....position,uv} rather than sending it as separate arrays. This is both faster for the GPU when you draw, and it allows you to do stuff like this: code:
Joda fucked around with this message at 17:19 on May 2, 2015 |
|
# ? May 2, 2015 16:45 |
|
Joda posted:I think it may be because you're binding the UVs to location 2 instead of 1. I'm not sure why you're doing this, since you have no normals, so you could have positions in 0 and UVs in 1. When the shader compiler reads this: Ah, that worked, thanks! I thought that since I used the same value in glBindAttribLocation, the order wouldn't matter. The reason I had 2 for UV was because I wanted to use different locations for color and uvs, which in retrospect doesn't make much sense. I'll have to look into using a single array too then, I guess.
|
# ? May 2, 2015 17:19 |
|
You should use glGetAttribLocation to know what location to bind to.
|
# ? May 3, 2015 03:15 |
|
Edit: Do yourself a favor and skip this whole post I've been tearing my hair out over my normals, I clearly don't understand what the gently caress I'm doing... I'm drawing a terrain mesh with the normal data as a separate texture. The reason for this is because I want the normal data to have a higher resolution than the terrain vertices. Here is the terrain mesh, with normal data: Don't mind the seam, this is where two terrain meshes (and hence 2 normal maps) meet, I'll get around to fixing that next... The pixel shader for this is simply: code:
The normals are encoded in C# like this: code:
This obviously gives me slightly incorrect results. Modifying the shader to add some sort of directional light, from straight above, and decoded normals: code:
the flat areas are somewhat off-white. Putting in a light direction of something like 0.25, 0.25, -1 gives me pretty close to pure white for the flat areas, which makes sense given how slightly off-color the normal map texture is. The surface format of the texture is 32-bit RGBA (Color). Does anyone have any idea why this precision is lost? I'm actually trying to implement some more advanced features into my graphics engine but it's really hard to debug stuff like SSAO when my foundation isn't rock-solid, and even basic stuff like my normals are wrong. Edit: Lmao of course 5 minutes after posting this I realize I'm normalizing the encoded normals. I've been trying to debug this for like a week, folks Mata fucked around with this message at 12:39 on May 23, 2015 |
# ? May 23, 2015 12:34 |
|
OneEightHundred posted:Is it legal to alias the outputs of a compute shader as a different type in another shader? You almost certainly answered your question by now, but the answer is "100% yes".
|
# ? May 30, 2015 15:59 |
|
I'm trying to follow this tutorial, which has this example code but in F# using these bindings to GLFW/OpenGL. My code is here. It's very simple, just drawing a flat triangle on the z = 0 plane. With the orthographic view, I can see the triangle. When I switch to the perspective view, I can't. I've tried a million different parameters for the vectors in lookAt, and for createPerspectiveFoV. The key part of the code is: code:
gonadic io fucked around with this message at 22:53 on Jun 11, 2015 |
# ? Jun 11, 2015 22:48 |
|
I don't know f# but out of curiosity why do you do "float32 4 / float32 3" instead of just 4.0f / 3.0f? Can you post the value of the perspective matrix? Maybe try flipping the winding of the triangle. Also try reversing the order of your matrix multiplications.
|
# ? Jun 12, 2015 00:05 |
|
Sex Bumbo posted:I don't know f# but out of curiosity why do you do "float32 4 / float32 3" instead of just 4.0f / 3.0f? No idea, just a brain fart which I've changed now. This is really weird, swapping to 'model * view * projection' made it work perfectly, it looks identical to the one in the tutorial. And I've checked, matrix multiplication isn't implemented backwards. Any idea why this might be? e: I mean it's not the end of the world since I suspect that this tutorial is going to introduce me to glMatrixMode soon enough anyway but it's still puzzling. gonadic io fucked around with this message at 01:47 on Jun 12, 2015 |
# ? Jun 12, 2015 01:30 |
E: Happy you got it fixed. Nothing to see here.
Joda fucked around with this message at 01:37 on Jun 12, 2015 |
|
# ? Jun 12, 2015 01:34 |
|
gonadic io posted:No idea, just a brain fart which I've changed now. The multiplication is going to depend on the implementation of the matrix functions you're using and also your vertex shader. I thought that might be the issue because your orthogonal projection is just a scale in this case and would work either way. The perspective projection wouldn't though.
|
# ? Jun 12, 2015 04:04 |
Does anyone here have any experience with compiling and installing the G3D Innovation Engine on Linux? I tried just using the python script that comes with the latest version, but it basically just stops after unzipping ffmpeg and gives me an sh error about a missing parenthesis or an expected bracket. I think it's generating faulty make files or something, but I don't know enough about either to fix it. My usual MO with poo poo like this is to hammer it into submission with cmake, but there's a shittonne of dependencies for everything, which is a lot of work to sort out, and I suck at linking stuff in the right order (basically I shoot randomly until something sticks.)
|
|
# ? Jun 27, 2015 12:53 |
|
How does photon mapping typically deal with off-manifold areas in the final gather phase? Like, if you're sampling a point that's at the edge of a surface, half of the sample area is going to be off the surface where photons can't actually hit, which would make the result artificially dark. Do you have to compute it as the area as that's actually hittable (i.e. by clipping the sample area against the manifold), or is there some other strategy?
|
# ? Jun 28, 2015 00:07 |
|
OneEightHundred posted:How does photon mapping typically deal with off-manifold areas in the final gather phase? Like, if you're sampling a point that's at the edge of a surface, half of the sample area is going to be off the surface where photons can't actually hit, which would make the result artificially dark. Do you have to compute it as the area as that's actually hittable (i.e. by clipping the sample area against the manifold), or is there some other strategy? Boundary bias is an issue, yes. There are two main categories of approaches for reducing it that I am aware of. * Rescale your density estimation volume to discard regions outside of the domain, as you suggest. * Reproject regions outside of the domain to regions inside the domain. Either requires some way to estimate the intersection of the photon domain and the gather region which is of course a problem. A generic but fairly costly solution is to use the convex hull of the set of nearest photons for the density estimate, instead of some simpler bounding volume. E: Wann Jensen's SIGGRAPH 2002 course on photon mapping covers it, briefly. It's a good read, if a bit out of date now I suppose. It's all vertex merging nowadays thanks to the VCM paper, even though that's exactly the same thing. Xerophyte fucked around with this message at 03:07 on Jun 28, 2015 |
# ? Jun 28, 2015 02:46 |
For my B.Sc. project I need to do multiple samplings of 4 separate buffers per fragment. To achieve somewhat decent frame times, I want to avoid sampling too many separate textures and cache-misses if possible. Say I want to pack 128 bits of arbitrary information into a GL_RGBA32F format, are there any guides on how to "cheat" GLSL in a way that will allow me to pack and unpack the information? An example of what I want to do: Fragment input: code:
code:
|
|
# ? Jul 7, 2015 03:07 |
|
Joda posted:For my B.Sc. project I need to do multiple samplings of 4 separate buffers per fragment. To achieve somewhat decent frame times, I want to avoid sampling too many separate textures and cache-misses if possible. Say I want to pack 128 bits of arbitrary information into a GL_RGBA32F format, are there any guides on how to "cheat" GLSL in a way that will allow me to pack and unpack the information? An example of what I want to do:

1) Why not use GL_RGBA32UI instead? You should be able to do your casting natively there -- pack the normals/positions into four 32-bit words however you see fit.

2) If you're converting your vectors/positions to normalized fixed-point values (i.e. a uniform mapping of bits covering the range [0, 1]) you won't need these, but it might be worth looking at the floatBitsToUint/uintBitsToFloat GLSL functions.

3) Be sure what you're doing is actually helping. Texture caches on modern GPUs are really just memory maps, so having 4 separate RGBA8 textures being loaded sequentially will not necessarily be any less efficient than 1 RGBA32 read. With the 128-bit format you're packing your 'structure' adjacently in memory, but remember that each thread/pixel in the GPU isn't being executed sequentially -- it's running in parallel as part of a SIMD group (a 32-thread Warp/64-thread Wavefront depending on if you're talking NV or AMD). Conceptually, the code might be: code:
code:
Now sampling multiple textures MIGHT be a problem for you. First, the texture unit itself has a limit on how much throughput it can handle in terms of requests; however, if you're just doing four taps you shouldn't be hitting this -- it's usually more of an issue for things like soft shadowing shaders that are doing 13+ taps per pixel. You might also have a problem if your texture reads are dependent (that is, the coordinate for one read is derived from the result of another); in that case you have to serialize your reads, and depending on what other work your shader is doing you may not be able to effectively hide the latency of the texture access with arithmetic instructions. From what it looks like, though, this isn't what you are doing.

It may all be a wash anyways if all of your math ops require all the values to be read before they can do any work. The memory behavior will still be slightly different, but you're still pulling the same total bandwidth, so there will be a lot more variability based on the hardware's specific caching strategy. Still, if there are ANY operations you're doing that only need a subset of the results, the compiler should be able to reorder the shader so that they are operating while the rest of the data is loading, leading to better total throughput by overlapping latencies.

Finally, all of this is not to say this isn't a clever idea. There's definitely value in it for something like G-Buffer packing in deferred shaders. For example, a normal doesn't really need all four channels in an RGBA texture -- you could choose to pack it as 3xFloat16 values, or even two (deriving the third from the fact that the normal is, well, normalized). If you can use bit packing to compress below what you'd be using in separate textures then the gains might start to appear. Just keep in mind that the packing/unpacking instructions aren't free either, so if you're at all instruction-bound then it may hurt more than it helps.
|
# ? Jul 7, 2015 12:55 |
Thanks for all that! Definitely a lot to consider. Specifically, what I'm doing is global illumination with a 2-layer g-buffer, and to my understanding I'm going to be doing at least 9 samples from each buffer for each method (which would become 36 samples for radiosity and 36 for AO per fragment if I use 4 separate buffers, as opposed to 9+9). If I have the time I'll probably implement traditional separate-textures deferred rendering for comparison, since it'll make a nice addition to the report. As for normal packing, the paper I'm following already recommends giving the radiosity algorithm two 16-bit normals in a single 32-bit word. E: Is there anywhere I can read up on GLSL's internal formats? My understanding is that a vec4 is 4 32-bit floats, so I need to know how a 32-bit float gets converted to its equivalent 8-bit fixed-point channel before packing it. Joda fucked around with this message at 17:21 on Jul 7, 2015 |
|
# ? Jul 7, 2015 16:55 |
|
If you use an isampler instead of a sampler in your shader, you'll get an ivec4 back from the texture function when you go to sample, so everything is integers.
|
# ? Jul 7, 2015 17:29 |
|
What's the latest craze in shadow mapping? I screwed myself royally trying to mix EVSM and SpeedTree back in the day, and I've kind of kept a distance since. Some people say floating point precision's to blame; But I know (na na na na na) It's all Nyquist's fault
|
# ? Jul 7, 2015 17:58 |
|
With the usual caveat that I don't work with this weird raster stuff and my approach to shadows is "shoot more rays": Moment Shadow Mapping is the latest VSM/ESM, lets-filter-our-problems-away approach. It seems neat, I haven't actually implemented it. Here's some comparisons which show that, eh, it's about as good as the rest of them. I imagine trees will still suck. If you rightly feel that dynamic geometry is for chumps and really just want a better light map then you may want to pack all your shadow map data in an SVO, merge all your common subvoxels and end up with the Compact Precomputed Voxelized Shadows scheme. Pros: you can have 256Kx256K resolution shadow maps that are cheap to filter. Cons: no moving your stuff, expensive precomputations (if not as expensive as in that paper). Xerophyte fucked around with this message at 18:53 on Jul 7, 2015 |
# ? Jul 7, 2015 18:50 |
|
Definitely don't think of texture memory as a giant contiguous sequential array of bytes that would normally thrash a CPU cache if you accessed it all at once. I don't know of a great analogy though -- maybe it's like a bunch of students who all need books from a library, which they access by talking to a librarian who then brings back as many books as they can carry. Also, if you're using a modern GLSL version you can go hog wild with integer operations. Sex Bumbo fucked around with this message at 19:25 on Jul 7, 2015 |
# ? Jul 7, 2015 19:19 |
|
Looks like my lust for D3D12 is going to force me into Windows 10. Got to figure out what the slow parts are right away or else I won't be able to figure out what horrible flaws the next crop of papers are trying to hide
|
# ? Jul 26, 2015 18:30 |
If I have a 2D array texture with two layers in OpenGL, am I wrong in assuming that I would access the contents of the first layer like so?:code:
|
|
# ? Jul 26, 2015 21:35 |
|
Did you bind it correctly? CommunityEdition posted:Looks like my lust for D3D12 is going to force me into Windows 10. Got to figure out what the slow parts are right away or else I won't be able to figure out what horrible flaws the next crop of papers are trying to hide I'm in full hype mode, DX12 is going to solve all our problems. World peace, climate change, the heat death of the universe, it's going to be great.
|
# ? Jul 26, 2015 22:39 |
Sex Bumbo posted:Did you bind it correctly? I bound it like I would any other texture. C++ code:
C++ code:
C++ code:
Joda fucked around with this message at 23:38 on Jul 26, 2015 |
|
# ? Jul 26, 2015 23:33 |
|
Sex Bumbo posted:I'm in full hype mode, DX12 is going to solve all our problems. World peace, climate change, the heat death of the universe, it's going to be great. I might try out DX12, especially because the dev tools look pretty neato.
|
# ? Jul 26, 2015 23:46 |
Joda posted:I bound it like I would any other texture. I figured it out. I'd somehow missed that the boilerplate code I took my parameters from had a mipmap mode in the minification filter. Changed all parameters to int and min filter to linear and at least it can draw the top layer now.
|
|
# ? Jul 27, 2015 01:57 |
|
I'm having an issue with dx12 where the following code copied from samples fails saying the adapter isn't compatible:code:
The specific error is: "hr = 0x887a0004 : The specified device interface or feature level is not supported on this system. " The device description is: "Description = 0x0397eedc L"NVIDIA GeForce GTX 560M " revision 161 Sex Bumbo fucked around with this message at 02:36 on Aug 3, 2015 |
# ? Aug 3, 2015 02:32 |
|
Hey guys, I'm a 3D noob and want to take a model I made with Agisoft Photoscan (structure from motion) and make a short animation of flying over it. What's the best/cheapest/whatever program to do this quickly?
|
# ? Aug 5, 2015 01:12 |
|
Blender?
|
# ? Aug 5, 2015 19:02 |
|
Is there a cheatsheet for determining what version of opengl some source is using? For example, glBegin()/glEnd() means it's time to walk away because that's 1995 vintage code. What's a good grep term for opengl 3.3 / opengl 4.x code? I'm thinking of just searching the source for "#version", since it'll show up in either inlined shader source or standalone shader files - but that only gives a lower bound - if the shaders have been in the code a while and unchanged they'll continue to be compatible. I've been looking through various OGL programs and unless they prominently declare the version they target, I have to examine their render loop to figure out what features they're using.
|
# ? Aug 6, 2015 14:26 |
|
Nope -- and sometimes it's ambiguous. New versions of OpenGL are made by simply taking extensions from previous versions and folding them into the core, so you can't say whether it's "OpenGL 4.2 with all these extensions" or "OpenGL 4.4". Out of curiosity, why do you want to know?
|
# ? Aug 6, 2015 16:47 |
|
Don't they remove the ARB suffix when that happens?
|
# ? Aug 6, 2015 16:58 |
|
They used to do that, and then they realized that nobody liked the double-dispatch and would continue using the ARB extensions because they always have more driver support, etc., so now they just say "this extension is part of OpenGL 4.4" and such.
|
# ? Aug 6, 2015 17:01 |
|
Suspicious Dish posted:Nope -- and sometimes it's ambiguous. New versions of OpenGL are made by simply taking extensions from previous versions and folding them into the core, so you can't say whether it's "OpenGL 4.2 with all these extensions" or "OpenGL 4.4". I've been going through open source opengl games/engines looking for modern ones so I can play around with shaders on a 'real' project. One of the biggies is rendering to a framebuffer then doing a final shader pass before going to screen - things like sine-wave ripple displacement effects across the screen. I don't think you can do a displacement like that without first rendering the scene to a framebuffer, right? Trying to hack that into an immediate mode setup like glquake sucks. Edit: On that count, I can't figure out why on the same machine some engines have different shader limits: code:
Harik fucked around with this message at 12:17 on Aug 9, 2015 |
# ? Aug 7, 2015 19:10 |
|
I want to render a concave, textured polygon in OpenGL but I'm having a hard time finding good resources/algorithms on how to do the tessellation + uv mapping of the resulting tris. I know there are some GLUT functions that does this but I don't think that would work if I wanted to port this to mobile for example.
|
# ? Aug 8, 2015 12:48 |
|
So I have this problem in which I am making a Civ grid like board game in Unity, but have a problem in that if I have two tiles with different terrain textures next to each other it leaves a really ugly seam; so clearly I want some sort of texture splatting. Does anyone know how using shaders GLSH/HLSL I can sample a given objects texture and then do some sort of directional blending? I wouldn't want the whole tile blender for instance, just the edges.
|
# ? Aug 15, 2015 16:25 |
Note: I don't know how you're handling actually drawing the grid, but this is based on the assumption that you know in the shaders what the coordinates of the current tile are. I'm not too familiar with Unity. With pure OpenGL you can upload an array of texture samplers representing the grid (just keep in mind that there's a hard limit on how many uniforms you can upload, and it depends on hardware platform) and then, based on distance to grid separators, do interpolation between the closest neighboring tile(s) and the current tile. A faster alternative to uploading uniforms every frame (especially if the map layout is static) is generating a single integer texture that holds the texture ID for every tile, then uploading all grid tile texture samplers in an array and using the index extracted from the ID texture. Texture generation and sampling would look something like this: C++ code:
code:
You also need to make sure that grid_textures are all uploaded in the same order every time (obviously). Also, the above assumes square tiles. Refurbishing it for hexagons or whatever shouldn't be too hard. I hope that was at least somewhat helpful. E: I am assuming here that you are drawing the entire grid in a single draw call. Also, there's a million different ways to solve these kinds of problems, this is just how I'd probably do it. E3: If you're asking how to do the actual interpolation between neighbours, it's just a question of finding the function that looks the best. You probably want something inversely exponential based on distance from dividing line. (i.e. so it goes very fast from .5 neighbour .5 self at edge to 1 self 0 neighbour approaching the center.) Joda fucked around with this message at 17:50 on Aug 15, 2015 |
|
# ? Aug 15, 2015 17:07 |
|
|
The grid is just a large number of 3D hexagonal objects, so it isn't necessarily flat.quote:(especially if the map layout is static) Yes, I don't plan on changing the map once it's generated. Another key thing though is that it is procedural and thus semi-random. quote:I'm not too familiar with Unity. With pure OpenGL you can upload an array of texture samplers representing the grid (just keep in mind that there's a hard limit on how many uniforms you can upload, and it depends on hardware platform) and then based on distance to grid separators do interpolation between closest neighboring tile(s) and current tile. So let's see if I can vaguely understand the code before I'm off to learn shaders from the shader wizards: code:
code:
code:
code:
code:
code:
So, to take a guess at blending only for UVs corresponding to a given direction: if I know which direction the adjacent tile is in, and I know roughly where my UVs are because they are in a grid (from 0 to 1 starting at the bottom left corner of the mesh?), then if uvs[u-value] is > Something and < SomethingElse, blend with the adjacent texture? That sounds like it would give a roughly triangle-shaped patch of the mesh being blended, which may still present some artifacts though?
|
# ? Aug 15, 2015 17:59 |