peepsalot
Apr 24, 2007

Is there a way I can draw to a framebuffer in OpenGL such that if a pixel has been written to once then it is locked into that color and cannot be overwritten? I can't use the stencil or depth buffer and the pixel values could have alpha < 1.
Like if I clear it to (0,0,0,0), then only pixels whose value is still (0,0,0,0) should be allowed to be written to.

Or to put it another way: once a pixel's alpha value is non-zero, don't let it be drawn over anymore.
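
One classic trick that might fit, sketched here as an assumption rather than a definitive answer: "under" compositing via the blend function. It needs premultiplied-alpha source colors and a framebuffer with a destination alpha channel.
C++ code:
#include <GL/gl.h>

// Sketch: with glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE) the result is
// dst = (1 - dst.a) * src + dst. A pixel whose alpha has already reached 1
// receives no further contribution, so it is effectively locked; pixels
// with alpha < 1 can only be "topped up", never overwritten.
void setup_write_once_blending() {
  glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
  glClear(GL_COLOR_BUFFER_BIT);
  glEnable(GL_BLEND);
  glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
  // ... draw; fragments landing on already-opaque pixels have no effect ...
}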

peepsalot
Apr 24, 2007

As I understand it, shaders can't read pixel data out of the framebuffer.

peepsalot
Apr 24, 2007

OK, I thought about the issue some more, and it turns out that color idea would be a hack that wouldn't quite do what I want anyway.

What I really need is to render the same thing to two different depth buffers. I have a number of meshes to render, and I want one of the depth buffers to be cleared between drawing each mesh, while the other builds up the depth data for the whole scene and is only cleared at the beginning of each frame.
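
A minimal sketch of one way this might be set up, assuming framebuffer object support (e.g. via an extension loader like GLEW) and that drawing each mesh twice is acceptable. All names are illustrative.
C++ code:
// Sketch: two FBOs share the same color texture but carry separate depth
// renderbuffers. fbo_mesh's depth is cleared per mesh; fbo_scene's depth
// persists for the whole frame. Assumes color_tex, depth_mesh, depth_scene
// are already allocated at the same size (glTexImage2D / glRenderbufferStorage).
void attach(GLuint fbo, GLuint color_tex, GLuint depth_rb) {
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, color_tex, 0);
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                            GL_RENDERBUFFER, depth_rb);
}

void draw_frame(GLuint fbo_scene, GLuint fbo_mesh) {
  glBindFramebuffer(GL_FRAMEBUFFER, fbo_scene);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  // for each mesh:
  {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_mesh);
    glClear(GL_DEPTH_BUFFER_BIT);  // per-mesh depth reset
    // ... draw the mesh (color plus per-mesh depth) ...
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_scene);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    // ... draw the mesh again, depth-only, into the persistent scene depth ...
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
  }
}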

peepsalot
Apr 24, 2007

Hi, this is not specifically for OpenGL/DX, but I have a 3D geometry question. I'm looking for some info on how to implement a general algorithm that takes a 3D triangle mesh (closed/solid) and a Z value and returns the 2D intersection of the 3D object with the plane at that Z.

So far, I'm thinking to loop over every edge, and if the edge spans the given Z value, calculate the intersection point with simple linear interpolation, and then sort of connect the dots (sketched below). The trickier part seems to be knowing how to connect all these points, and knowing which polygons of the 2D cut are "holes".

The input data I have is in the form of a list of 3D points plus a list of indices of points for each triangle in the mesh.
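
A minimal sketch of the interpolation step described above; the type and names are just for illustration.
C++ code:
struct Vec3 { double x, y, z; };

// If edge (a, b) spans the plane z = zc, compute the intersection point by
// linear interpolation. Returns false if both endpoints are strictly on the
// same side; edges lying exactly in the plane need separate handling.
bool slice_edge(const Vec3& a, const Vec3& b, double zc, Vec3& out) {
  if ((a.z - zc) * (b.z - zc) > 0.0) return false;  // same side: no crossing
  if (a.z == b.z) return false;                     // edge in the plane: special-case
  double t = (zc - a.z) / (b.z - a.z);              // 0 at a, 1 at b
  out = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), zc };
  return true;
}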

peepsalot
Apr 24, 2007

Ralith posted:

Every triangle in the 3D mesh corresponds to 0, 1, or 2 vertices in the set of 2D polygons that represent the slice you're looking for. If the mesh is watertight, each vertex should lie on a multiple of two edges if the polygon it's associated with is nonempty. One approach would therefore be to loop over the set of triangles, compute the edge or vertex (if any) contributed by that triangle, and then compute the polygons by deduplicating vertices. To determine which side of any given edge is "inside," just project the normal of the associated triangle onto your plane (or store your edges in a manner that encodes the sidedness in the first place).
How would you handle coplanar tris, or a tri having a single edge coincident with the plane? Also, I'm thinking that tris where only one point intersects the plane can be safely ignored?

I should probably also mention that this is intended to eventually re-mesh the whole 3d object, so that all the vertices in the new mesh are aligned with layers (similar to how 3d printing "slicer" programs work).
So another challenge is that I'd like to be able to determine which vertices connect between two different Z layers.
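
A minimal sketch of the loop-building step Ralith describes, assuming each crossing triangle has already contributed one directed segment (oriented by the triangle winding so "inside" is consistently on one side) and the endpoints have been deduplicated into integer ids. Names are hypothetical.
C++ code:
#include <map>
#include <set>
#include <vector>

// next[v] = the vertex that v's outgoing segment points to. On a watertight
// mesh each slice vertex has exactly one successor, so following the links
// partitions the segments into closed loops. Loop orientation (CW vs CCW)
// then distinguishes outer boundaries from holes.
std::vector<std::vector<int>> chain_segments(const std::map<int, int>& next) {
  std::vector<std::vector<int>> loops;
  std::set<int> visited;
  for (const auto& kv : next) {
    if (visited.count(kv.first)) continue;
    std::vector<int> loop;
    for (int v = kv.first; !visited.count(v); v = next.at(v)) {
      visited.insert(v);
      loop.push_back(v);
    }
    loops.push_back(loop);
  }
  return loops;
}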

peepsalot
Apr 24, 2007

I have this idea to define a sort of B-rep solid 3D model in a probabilistic way. Given a set of trivariate normal distributions (Gaussian probability blobs in 3D space), I want to take a weighted sum of the probability distributions in the set, and render a contour surface wherever this sum equals some threshold.

Has this sort of thing been done? I'm thinking I'd need to do raymarching to find the surface?
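
For what it's worth, a minimal sketch of the field evaluation, simplified to isotropic Gaussians; a full trivariate normal would use the Mahalanobis distance with its covariance matrix in place of r²/σ².
C++ code:
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Blob { Vec3 mu; double sigma; double weight; };

// Weighted sum of Gaussian blobs minus the threshold: the contour surface
// is the zero set of this function (> 0 inside, < 0 outside).
double field(const std::vector<Blob>& blobs, const Vec3& p, double threshold) {
  double sum = 0.0;
  for (const Blob& b : blobs) {
    double dx = p.x - b.mu.x, dy = p.y - b.mu.y, dz = p.z - b.mu.z;
    double r2 = dx * dx + dy * dy + dz * dz;
    sum += b.weight * std::exp(-0.5 * r2 / (b.sigma * b.sigma));
  }
  return sum - threshold;
}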

peepsalot
Apr 24, 2007

Xerophyte posted:

They're similar but Gaussians are not signed distance fields. The value of the Gaussian is not the distance from its center, and if you use SDF algorithms with them they'll break in weird ways.

The general category here is "level set", of which implicit surfaces are a subset, and SDF geometry and metaballs are subsets of those. SDFs can be cheaply raymarched, since they provide a bound on the ray step size by design. General implicit surfaces can be harder to march, so using marching cubes to mesh them is the more common approach.

E: Inigo Quilez's SDF raymarching site has some good examples of how that can be used for ridiculously complex scenes if you want to feel inadequate.

Yeah, I saw Inigo's site, and it's crazy impressive, but really light on details of how the hell most of it is done.

peepsalot
Apr 24, 2007

Xerophyte posted:

The distance functions page he has covers most of the functions used. IQ is a crazy demoscener savant at actually modelling things with nothing but simple distance fields and transforms combined in nutty ways, but that page covers most of the operations you have, outside of highly specific fractals.

If you're wondering about how to do the actual ray marching, it both is and is not complex. Making a simple ray marcher is easy. Making one that's both fast and robust tends to involve a lot of scene-specific tweaking.

A very basic conservative SDF ray marcher will look something like
C++ code:
#include <functional>

// Minimal vector type and helpers so the sketch is self-contained.
struct float3 { float x, y, z; };
float3 operator+(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
float3 operator*(float s, float3 v) { return {s * v.x, s * v.y, s * v.z}; }

// A distance function: maps a point to (an estimate of) its distance to the surface.
using SDF = std::function<float(float3)>;

struct Ray {
  float3 o;   // origin
  float3 dir; // direction (assumed normalized)
};

// Minimum threshold: any point closer to the surface than this is counted as part of it.
const float kMinThreshold = 1e-5;

// Maximum threshold: if we are ever this far away from the surface then assume we will never
// intersect. In practice you'll probably clamp the ray to some bounding volume instead.
const float kMaxThreshold = 1e2;

bool march_ray(Ray& r, SDF f) {
  float distance;
  while(true) {
    // Compute the distance to the surface. We can move this far without intersecting.
    distance = f(r.o);

    // Specifically, we can move this far in the ray direction without intersecting.
    r.o = r.o + distance * r.dir;

    // If we're sufficiently close to the surface, count as an intersection.
    if (distance < kMinThreshold) {
      return true;
    }

    // If we're sufficiently far away from the surface, count as escaping.
    if (kMaxThreshold < distance) {
      return false;
    }
  }
}
This has a couple of problems:
1. It only works if the SDF is an actual SDF. If it's an approximation that can be larger than the actual distance to the surface, then the marching breaks. Such approximations are common in practice. They occur if you have some complex fractal that you can't get a true distance field for, or if you are applying any sort of warping or displacement function to a real SDF, or if you are using a non-Euclidean distance norm, and so on.
2. If the ray is moving perpendicular to the gradient of the distance field (i.e. the ray is parallel with a plane, slab or box) then this can be horrifically slow as you take a ton of tiny steps without ever getting closer.

In practice, most SDFs people use to do cool things are not really SDFs, and you probably need to replace the naive r.o = r.o + distance * r.dir; marching step with something more careful. Exactly what "more careful" means tends to be very scene-specific. Common tweaks include:
- Multiplying the step size with a tweakable global scale factor.
- Adaptively increasing the step size if the distance is changing slower than expected between iterations.
- Clamping the step size to some min and max bounds, then doing a binary search to refine the intersection point once you've concluded one exists.
Finding the right set of tweaks for your scene tends to be challenging. If you get them wrong then you get stuff like this where sometimes the marcher will fail to intersect the surface.

For non-SDF raymarching -- when your surface is implicitly defined with something like a bool is_inside(point p) -- it's common to just use a fixed step size, possibly with a binary search step to refine intersections. This can be very, very slow, which is why even approximate SDFs are nice.

E: The code initially used do-while for some reason, but I decided that this made me feel unclean.
OK, but that example code would just draw a sphere (for example) as completely flat / indistinguishable from a solid circle, right? How is it shaded? Is the surface normal also computed?
Also I'm interested in creating a closed tessellated mesh (exporting to STL format) from this surface. I am starting to understand how it can be rendered on screen with shaders, but is there any particular approach to tessellating based on SDFs? Or would SDF be a poor fit for a task like that?
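
On the shading question, the standard approach (and roughly what most shadertoy-style marchers do) is to estimate the normal as the normalized gradient of the distance field at the hit point, via central differences. This sketch reuses the float3 and SDF types from the marcher code above.
C++ code:
#include <cmath>

// Central-difference gradient of the SDF, normalized; h is a tunable epsilon.
float3 estimate_normal(const SDF& f, float3 p) {
  const float h = 1e-4f;
  float3 n = {
    f({p.x + h, p.y, p.z}) - f({p.x - h, p.y, p.z}),
    f({p.x, p.y + h, p.z}) - f({p.x, p.y - h, p.z}),
    f({p.x, p.y, p.z + h}) - f({p.x, p.y, p.z - h})
  };
  float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
  return {n.x / len, n.y / len, n.z / len};
}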

peepsalot
Apr 24, 2007

Are there any online calculators or utilities that help create transformation matrices?

peepsalot
Apr 24, 2007

Colonel J posted:

I really enjoyed this shader deconstruction by Inigo Quilez : https://www.youtube.com/watch?v=Cfe5UQ-1L9Q

Dude is really good, and he makes it quite accessible.
drat, I've seen some of this guy's demos and web pages which are really informative and impressive. So I'm interested in checking this out eventually, but holy poo poo 6 hours long!

peepsalot
Apr 24, 2007

I never really got into Blender, but "design aid" (for what kind of project?) sounds like you want something more like a CAD program. Fusion 360 still has a no-cost licensing option, last I checked. Or if you want to go open-source turbonerd like me, you can use OpenSCAD (FreeCAD too, even).

peepsalot
Apr 24, 2007

As a general sort of workflow concept in CAD, you should try to get in the habit of defining as much as you can within 2D sketches for the various profiles of your object (each sketch can be defined on a different plane in 360), and then extruding, lofting, etc. from those 2D sketches into 3D, doing any 3D operations last.
Like, you can also just go at it with 3D primitives straight from the start, but things get messy that way for non-trivial designs.

I'd also recommend learning how parametric modeling works in 360; it can be a big time saver for tweaking designs without completely re-doing them. Basically you can define parameters/variables (this side length, that angle, etc.) at the beginning, build your sketches using those variable names, then tweak the parameters later with some quick edits of those values, and the whole design will re-adjust to match. You'll need a good grasp of constraints too, to get the most out of it.

And check out "Lars Christensen" on YouTube; he has a ton of F360 content. A lot of it might be more advanced than you're ready for, but I think he has a beginner tutorial playlist too.

peepsalot
Apr 24, 2007

Dominoes posted:

Love the parametric approach. It feels like how a programmer would 3d model. This tutorial is very nice. It walks you through how to build something, and in doing so lets you use various tools and features in a way where you see what you'd use them for.
I just briefly skimmed it and it looks decent. The only thing I would add, which I feel is often overlooked, is that fillet #1 could have basically been done in the sketch too!

peepsalot
Apr 24, 2007

I need help debugging some OpenGL code which is very old and crusty (still has a mix of some fixed function pipeline stuff in there :stonk:).
Right now I'm trying to find the source of some weird graphical glitches which only show up on the Mac CI server, which is running:
OpenGL Version: 2.1 APPLE-16.7.4
GL Renderer: Apple Software Renderer

The report from the server includes framebuffer screenshots, and the glitches show up as perfectly horizontal blank lines across various 3D-rendered triangles, where the background just shows through. Each tri has a different set of these missing lines (the exact lines are not global to the screen/framebuffer).

One thing I just noticed is that the shader code is basically written with single-precision float/vec3/vec4 variables in mind, but when vertex attributes are passed to the GPU, glVertexAttrib3d is called, so it's passing in doubles.
So my question at the moment is: would mixing single/double precision in that way likely cause problems?
Does OpenGL just know to coerce doubles into floats, or is there some risk of writing out of bounds with these double-width values, or is the behaviour undefined in such cases, or what?

I don't use Macs, so it's a bit difficult to debug the problem via the remote CI server.
I updated the shader recently, which introduced these glitches, but it tested fine on other Linux systems etc.

The only other thing I can think of is that the Apple Software Renderer has some bug in its fwidth function, which was one part of my changes.

peepsalot
Apr 24, 2007

Hey, got a question about OpenGL VBOs and using indexed glDrawElements[BaseVertex], with non-interleaved attributes.

So let's say I'm rendering a cube: 8 vertices, 6 faces, 12 tris. Forget about triangle strips for the moment and let's say I'm using basic GL_TRIANGLES, so there would be 12*3 = 36 vertex indices.
I have a shader which takes a vertex attribute relating to how edges are rendered: eg "internal" edges of cube faces would be omitted.
I want to avoid duplication of vertex coordinate data, but each vertex of the cube could be used in anywhere from 3 up to 6 triangles (or 4.5 on avg), and I would want that vertex attribute to possibly be different in each case.

Since the attributes are not interleaved, is there any way to have them indexed separately from the vertices?

As I understand it, you can set up the initial location and stride with glVertexAttribPointer, but how does that work with indexed elements? Is it always going to basically follow vertex_index * attribute_stride when accessing these attributes, even from separate buffers, or can it be made to just go sequentially through the attribute buffer while indexing the vertices indirectly?
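
For reference, a minimal sketch of the non-interleaved setup in question, with all buffer names illustrative. One caveat worth stating: in core OpenGL a single element index addresses every enabled attribute array at once, so attributes can't be indexed separately from positions; the usual workaround is to duplicate positions per unique (position, attribute) combination.
C++ code:
// Non-interleaved attribute arrays: one VBO per attribute, stride 0
// (tightly packed). With glDrawElements, each index i fetches element i
// from *every* enabled attribute array; there is no per-attribute index.
void draw_cube(GLuint position_vbo, GLuint edge_flag_vbo, GLuint index_vbo) {
  glBindBuffer(GL_ARRAY_BUFFER, position_vbo);
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
  glEnableVertexAttribArray(0);

  glBindBuffer(GL_ARRAY_BUFFER, edge_flag_vbo);
  glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, 0, (void*)0);
  glEnableVertexAttribArray(1);

  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_vbo);
  glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, (void*)0);
}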

peepsalot
Apr 24, 2007

How can I calculate whether a planar polygon in 3D space is front- or rear-facing in relation to the camera? I need to apply glPolygonOffset in the opposite direction depending on which side is visible.
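
A minimal sketch of the usual test, assuming counter-clockwise front faces and that the polygon vertices and camera position are expressed in the same coordinate space.
C++ code:
struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The polygon faces the camera if its (winding-dependent) normal points
// toward the eye, i.e. the eye is on the positive side of the plane.
// The sign of the result picks the glPolygonOffset direction.
bool front_facing(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& eye) {
  Vec3 n = cross(sub(v1, v0), sub(v2, v0));  // plane normal from winding
  return dot(n, sub(eye, v0)) > 0.0;
}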
