|
Is there a way I can draw to a framebuffer in OpenGL such that once a pixel has been written to, it is locked into that color and cannot be overwritten? I can't use the stencil or depth buffer, and the pixel values could have alpha < 1. For example, if I clear it to (0,0,0,0), then only pixels whose value is still (0,0,0,0) should be allowed to be written to. To put it another way: once a pixel's alpha value is non-zero, don't let it be drawn over anymore.
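To illustrate the behavior I'm after: it's basically destination-alpha gating, like what glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE) gives you when the first write is opaque. A tiny software sketch of that blend rule (plain Python to show the math, not actual GL; `blend_write_once` is a made-up name):

```python
# Software sketch of the destination-alpha blend rule:
#   out = src * (1 - dst.a) + dst
# i.e. the same math as glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE)
# with premultiplied-alpha colors.
def blend_write_once(dst, src):
    """dst and src are (r, g, b, a) tuples, premultiplied alpha."""
    f = 1.0 - dst[3]                      # headroom left in this pixel
    return tuple(s * f + d for s, d in zip(src, dst))

pixel = (0.0, 0.0, 0.0, 0.0)                            # cleared buffer
pixel = blend_write_once(pixel, (1.0, 0.0, 0.0, 1.0))   # first write lands
pixel = blend_write_once(pixel, (0.0, 1.0, 0.0, 1.0))   # second is rejected
```

The catch for the alpha < 1 case: a first write with alpha 0.5 only consumes half the headroom, so later fragments still blend into the remainder; a hard "any non-zero alpha locks the pixel" rule isn't expressible with fixed-function blend factors alone.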
|
# ¿ Nov 20, 2015 04:30 |
|
|
As I understand it, shaders can't read pixel data out of the framebuffer.
|
# ¿ Nov 20, 2015 05:21 |
|
OK, I thought about the issue some more, and it turns out that color idea would be a hack that wouldn't quite do what I want anyway. What I really need is to render the same geometry to two different depth buffers. I have a number of meshes to render, and I want one of the depth buffers to be cleared in between drawing each mesh, while the other should build up the depth data for the whole scene and only be cleared at the beginning of each frame.
|
# ¿ Nov 20, 2015 22:55 |
|
Hi this is not specifically for OpenGL/DX but I have a 3D geometry question. I'm looking for some info on how to implement a general algorithm that can take a 3d triangle mesh (closed/solid) and a Z value and return the 2d intersection of the 3d object with the plane at Z. So far, I'm thinking to loop over every edge, and if the edge spans the given z value, calculate the intersection point with simple linear interpolation and then you can connect the dots sort of. But the trickier part seems to be knowing how to connect all these points, and knowing which polygons of the 2d cut are "holes". The input data I have is in the form of a list of 3D points plus a list of indices of points for each triangle in the mesh.
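The interpolation step would look something like this (a minimal sketch; `slice_edge` is just an illustrative name):

```python
def slice_edge(p0, p1, z):
    """Return the intersection of segment p0-p1 with the plane Z=z, or None."""
    z0, z1 = p0[2], p1[2]
    if (z0 - z) * (z1 - z) > 0 or z0 == z1:   # edge doesn't span the plane
        return None
    t = (z - z0) / (z1 - z0)                  # linear interpolation factor
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# A triangle edge running from z=0 to z=2, sliced at z=1:
print(slice_edge((0.0, 0.0, 0.0), (2.0, 0.0, 2.0), 1.0))  # (1.0, 0.0, 1.0)
```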
|
# ¿ Jul 11, 2017 19:24 |
|
Ralith posted:Every triangle in the 3D mesh corresponds to 0, 1, or 2 vertices in the set of 2D polygons that represent the slice you're looking for. If the mesh is watertight, each vertex should lie on a multiple of two edges if the polygon it's associated with is nonempty. One approach would therefore be to loop over the set of triangles, compute the edge or vertex (if any) contributed by that triangle, and then compute the polygons by deduplicating vertexes. To determine which side of any given edge is "inside," just project the normal of the associated triangle onto your plane (or store your edges in a manner that encodes the sidedness in the first place).

I should probably also mention that this is intended to eventually re-mesh the whole 3D object, so that all the vertices in the new mesh are aligned with layers (similar to how 3D printing "slicer" programs work). So another challenge is that I'd like to be able to determine which vertices connect between two different Z layers.
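The "connect the dots" part, assuming exact shared endpoints (real meshes would need epsilon-tolerant point matching), can be sketched as chaining each segment's end point to whichever segment starts there until the loop closes:

```python
def chain_segments(segments):
    """Greedily connect (start, end) point pairs into closed loops.
    Assumes a watertight slice: every point appears exactly once as a
    start and once as an end. A sketch only; real mesh data needs
    epsilon matching rather than exact tuple equality."""
    nxt = {a: b for a, b in segments}   # start point -> end point
    loops = []
    while nxt:
        start, cur = nxt.popitem()
        loop = [start]
        while cur != start:
            loop.append(cur)
            cur = nxt.pop(cur)
        loops.append(loop)
    return loops

# Four segments forming a unit square:
square = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
          ((1, 1), (0, 1)), ((0, 1), (0, 0))]
loops = chain_segments(square)
print(len(loops), len(loops[0]))  # 1 4
```

Once the loops are closed, their winding (signed area), or the triangle-normal sidedness from the quote above, distinguishes outer boundaries from holes: holes wind opposite to the boundary that contains them.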
|
# ¿ Jul 11, 2017 22:26 |
|
I have this idea to define a sort of B-rep solid 3D model in a probabilistic way. Given a set of trivariate normal distributions (Gaussian probability blobs in 3D space), I want to take a weighted sum of each probability distribution in the set, and render a contour surface wherever this sum equals some threshold. Has this sort of thing been done? I'm thinking I'd need to do raymarching to find the surface?
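For what it's worth, this has definitely been done: it's essentially Blinn-style "blobby" / metaball implicit surfaces, with Gaussians as the falloff function. Evaluating the scalar field is the easy part; a sketch with isotropic blobs only (a full trivariate normal would carry a covariance matrix per blob):

```python
import math

def gaussian_field(p, blobs):
    """Weighted sum of isotropic 3D Gaussians evaluated at point p.
    blobs: list of (weight, center, sigma) tuples. Isotropic for
    brevity; general trivariate normals need a covariance matrix."""
    total = 0.0
    for w, c, s in blobs:
        d2 = sum((pi - ci) ** 2 for pi, ci in zip(p, c))
        total += w * math.exp(-d2 / (2.0 * s * s))
    return total

blobs = [(1.0, (0.0, 0.0, 0.0), 1.0), (0.5, (2.0, 0.0, 0.0), 1.0)]
threshold = 0.25
# The surface is the level set gaussian_field(p, blobs) == threshold;
# a raymarcher steps along each ray until the field crosses that value.
```

Raymarching works for display, but note that a sum of Gaussians is not a signed distance field, so you can't take sphere-tracing step sizes from it; fixed-size steps plus a bisection refine on the sign change is the safe option.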
|
# ¿ Oct 26, 2017 09:55 |
|
Xerophyte posted:They're similar but Gaussians are not signed distance fields. The value of the Gaussian is not the distance from its center, and if you use SDF algorithms with them they'll break in weird ways.

Yeah, I saw Inigo's site, and it's crazy impressive but really light on details of how the hell most of it is done.
|
# ¿ Oct 28, 2017 05:30 |
|
Xerophyte posted:The distance functions page he has covers most of the functions used. IQ is a crazy demoscener savant at actually modelling things with nothing but simple distance fields and transforms combined in nutty ways, but that covers most of the operations you have outside of highly specific fractals.

Also, I'm interested in creating a closed tessellated mesh (exporting to STL format) from this surface. I'm starting to understand how it can be rendered on screen with shaders, but is there any particular approach to tessellating based on SDFs? Or would an SDF be a poor fit for a task like that?
|
# ¿ Oct 29, 2017 21:19 |
|
Is there any online calculator or utility that helps create transformation matrices?
|
# ¿ Nov 13, 2017 20:32 |
|
Colonel J posted:I really enjoyed this shader deconstruction by Inigo Quilez : https://www.youtube.com/watch?v=Cfe5UQ-1L9Q
|
# ¿ Sep 30, 2019 19:22 |
|
I never really got into Blender, but "design aid" (for what kind of project?) sounds like you want something more like a CAD program. Fusion 360 still has a no-cost licensing option, last I checked. Or if you want to go open-source turbonerd like me, you can use OpenSCAD (or even FreeCAD).
|
# ¿ Feb 2, 2020 02:40 |
|
As a general sort of workflow concept in CAD, you should try to get in the habit of defining as much as you can within 2D sketches for the various profiles of your object (each sketch can be defined on a different plane in 360), and then extruding, lofting, etc. from those 2D sketches into 3D, doing any 3D operations last. You can also just go at it with 3D primitives straight from the start, but things get messy that way for non-trivial designs.

I'd also recommend learning how parametric modeling works in 360; it can be a big time saver for tweaking designs without completely redoing them. Basically, you define parameters/variables (this side length, that angle, etc.) at the beginning, build your sketches using those variable names, then tweak the parameters later with some quick edits of those values, and the whole design re-adjusts to match. You'll need a good grasp on constraints too, to get the most out of it.

And check out "Lars Christensen" on YouTube; he has a ton of F360 content. A lot of it might be more advanced than you're ready for, but I think he has a beginner tutorial playlist too.
|
# ¿ Feb 4, 2020 07:28 |
|
Dominoes posted:Love the parametric approach. It feels like how a programmer would 3d model.

This tutorial is very nice. It walks you through how to build something, and in doing so lets you use various tools and features in a way where you see what you'd use them for.
|
# ¿ Feb 5, 2020 02:53 |
|
I need help debugging some OpenGL code which is very old and crusty (it still has a mix of fixed-function pipeline stuff in there). Right now I'm trying to find the source of some weird graphical glitches which only show up on the Mac CI server, which is running:

OpenGL Version: 2.1 APPLE-16.7.4
GL Renderer: Apple Software Renderer

The report from the server includes framebuffer screenshots, and the glitches show up as perfectly horizontal blank lines across various 3D rendered triangles, where the background just shows through. Each tri is missing a different set of these lines (the exact lines are not global to the screen/framebuffer).

One thing I just noticed is that the shader code is basically written with single-precision float/vec3/vec4 variables in mind, but when the vertex attributes are passed to the GPU, glVertexAttrib3d is called, so it's passing in doubles. So my question at the moment is: would mixing single/double precision in that way likely cause problems? Does OpenGL just know to coerce doubles into floats, or is there some risk of writing out of bounds with these double-width values, or is the behaviour undefined in such cases, or what? I don't use Macs, so it's a bit difficult to debug the problem via a remote CI server.

I updated the shader recently, which introduced these glitches, but it tested fine on other Linux systems etc. The only other thing I can think of is that the Apple Software Renderer has some bug in its fwidth function, which was one part of my changes.
|
# ¿ May 25, 2020 23:53 |
|
Hey, got a question about OpenGL VBOs and indexed glDrawElements[BaseVertex] with non-interleaved attributes. Let's say I'm rendering a cube: 8 vertices, 6 faces, 12 tris. Forget about triangle strips for the moment and say I'm using basic GL_TRIANGLES, so there would be 12 * 3 = 36 vertex indices. I have a shader which takes a vertex attribute relating to how edges are rendered: e.g. "internal" edges of cube faces would be omitted. I want to avoid duplicating vertex coordinate data, but each vertex of the cube could be used in anywhere from 3 up to 6 triangles (4.5 on average), and I'd want that vertex attribute to potentially be different in each case.

Since the attributes are not interleaved, is there any way to have them indexed separately from the vertex positions? As I understand it, you set up the initial location and stride with glVertexAttribPointer, but how does that work with indexed elements? Is it always going to follow vertex_index * attribute_stride when accessing these attributes, even from separate buffers, or can it be made to step through the attribute buffer sequentially/directly while indexing the positions indirectly?
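For context, the fallback I know works is duplication: the element array applies the same index to every enabled attribute array, so each distinct (position, attribute) combination has to become its own vertex. A quick sketch of that dedup (`build_unified_buffers` is a made-up helper name):

```python
def build_unified_buffers(corners):
    """corners: per-triangle-corner (position_index, attr_index) pairs.
    Since one element-array index addresses every attribute array at
    the same position, distinct (position, attr) combinations must be
    expanded into distinct unified vertices."""
    remap = {}      # (pos_idx, attr_idx) -> new unified vertex index
    indices = []
    for key in corners:
        if key not in remap:
            remap[key] = len(remap)
        indices.append(remap[key])
    return remap, indices

# Two triangles sharing positions 0 and 2 but with different edge attrs:
corners = [(0, 0), (1, 0), (2, 0), (0, 1), (2, 1), (3, 1)]
remap, indices = build_unified_buffers(corners)
print(len(remap))   # 6 unified vertices (positions 0 and 2 duplicated)
print(indices)      # [0, 1, 2, 3, 4, 5]
```

The remap keys then drive gather passes that copy positions and attributes into the final (still non-interleaved, but now index-aligned) VBOs.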
|
# ¿ Nov 29, 2020 13:27 |
|
|
How can I calculate whether a planar polygon in 3D space is front- or back-facing relative to the camera? I need to apply glPolygonOffset in the opposite direction depending on which side is visible.
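On the CPU side, my understanding is it comes down to the sign of the dot product between the polygon's normal and the vector from the polygon to the camera (a sketch; assumes the normal points out of the side you call "front"):

```python
def is_front_facing(normal, poly_point, camera_pos):
    """True if the polygon's front side faces the camera.
    normal: outward normal of the front side; poly_point: any point on
    the polygon; camera_pos: camera position, all in the same space."""
    view = tuple(c - p for c, p in zip(camera_pos, poly_point))
    dot = sum(n * v for n, v in zip(normal, view))
    return dot > 0.0

# Polygon in the XY plane with a +Z normal, camera above then below it:
print(is_front_facing((0, 0, 1), (0, 0, 0), (0, 0, 5)))   # True
print(is_front_facing((0, 0, 1), (0, 0, 0), (0, 0, -5)))  # False
```

If the decision can happen on the GPU instead, gl_FrontFacing in the fragment shader reports the same thing per fragment.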
|
# ¿ Aug 11, 2021 20:30 |