Xerophyte
Mar 17, 2008

This space intentionally left blank


peepsalot posted:

Yeah I saw Inigo's site, and it's crazy impressive but really light on details of how the hell most of it is done.

The distance functions page he has covers most of the functions used. IQ is a crazy demoscener savant at actually modelling things with nothing but simple distance fields and transforms combined in nutty ways, but that covers most of the operations you have outside of highly specific fractals.

If you're wondering about how to do the actual ray marching, it both is and is not complex. Making a simple ray marcher is easy. Making one that's both fast and robust tends to involve a lot of scene-specific tweaking.

A very basic conservative SDF ray marcher will look something like
C++ code:
struct Ray { // struct, not class: march_ray below needs access to the members
  float3 o;   // origin
  float3 dir; // direction
};

// Minimum threshold: any point closer to the surface than this is counted as part of it.
const float kMinThreshold = 1e-5;

// Maximum threshold: if we are ever this far away from the surface then assume we will never
// intersect. In practice you'll probably clamp the ray to some bounding volume instead.
const float kMaxThreshold = 1e2;

bool march_ray(Ray& r, SDF f) {
  float distance;
  while(true) {
    // Compute the distance to the surface. We can move this far without intersecting.
    distance = f(r.o);

    // Specifically, we can move this far in the ray direction without intersecting.
    r.o = r.o + distance * r.dir;

    // If we're sufficiently close to the surface, count as an intersection.
    if (distance < kMinThreshold) {
      return true;
    }

    // If we're sufficiently far away from the surface, count as escaping.
    if (kMaxThreshold < distance) {
      return false;
    }
  }
}
This has a couple of problems:
1. It only works if the SDF is an actual SDF. If it's an approximation that can be larger than the actual distance to the surface then the marching breaks. Such approximations are common in practice. They occur if you have some complex fractal that you can't get a true distance field for, or if you are applying any sort of warping or displacement function to a real SDF, or if you are using a non-Euclidean distance norm, and so on.
2. If the ray is moving perpendicular to the gradient of the distance field (i.e. the ray is parallel with a plane, slab or box) then this can be horrifically slow as you take a ton of tiny steps without ever getting closer.

In practice, most SDFs people use to do cool things are not really SDFs and you probably need to make the naive r.o = r.o + distance * r.dir; marching step more careful. Exactly what "more careful" means tends to be very scene-specific. Common tweaks include:
- Multiplying the step size with a tweakable global scale factor.
- Adaptively increasing the step size if the distance is changing slower than expected between iterations.
- Clamping the step size to some min and max bounds, then doing a binary search to refine the intersection point once you've concluded one exists.
Finding the right set of tweaks for your scene tends to be challenging. If you get them wrong then you get stuff like this where sometimes the marcher will fail to intersect the surface.
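To make that list of tweaks concrete, here's a rough sketch of a marcher with a global step scale, clamped steps, and a bisection refine once a hit is bracketed. It marches the scalar parameter t along the ray rather than moving a point; every constant and name here is a made-up per-scene knob, not anything canonical:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sketch of a "tweaked" marcher. sdf(t) returns the (possibly approximate)
// scene distance at the point origin + t * direction.
template <typename SDF>
bool march_tweaked(SDF sdf, float t_max, float* t_hit) {
  const float kStepScale = 0.7f;   // back off: the field may overestimate distance
  const float kMinStep   = 1e-4f;  // avoid stalling on grazing rays
  const float kMaxStep   = 1.0f;   // avoid tunneling past thin features
  const float kHitEps    = 1e-4f;
  const int   kMaxIters  = 256;

  float t = 0.0f, prev_t = 0.0f;
  for (int i = 0; i < kMaxIters && t < t_max; ++i) {
    float d = sdf(t);
    if (d < kHitEps) {
      // Hit bracketed in (prev_t, t]; bisect to refine the intersection.
      float lo = prev_t, hi = t;
      for (int j = 0; j < 16; ++j) {
        float mid = 0.5f * (lo + hi);
        if (sdf(mid) < kHitEps) hi = mid; else lo = mid;
      }
      *t_hit = hi;
      return true;
    }
    prev_t = t;
    t += std::clamp(kStepScale * d, kMinStep, kMaxStep);
  }
  return false;
}
```

The scale factor trades speed for robustness: below 1.0 it tolerates fields that overestimate, at the cost of more iterations.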

For non-SDF raymarching -- when your surface is implicitly defined with something like a bool is_inside(point p) -- it's common to just use a fixed step size, possibly with a binary search step to refine intersections. This can be very, very slow, which is why even approximate SDFs are nice.
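A minimal sketch of that fixed-step-plus-bisection idea, again along the ray parameter t (step size and names are made up; the origin is assumed to start outside the surface):

```cpp
#include <cassert>
#include <cmath>

// Fixed-step marching for a surface defined only by an inside/outside test.
// The step size is a tuning knob: too large tunnels through thin features,
// too small is very slow -- which is why even approximate SDFs help.
template <typename InsideFn>
bool march_fixed(InsideFn inside, float t_max, float step, float* t_hit) {
  float prev = 0.0f;
  for (float t = step; t <= t_max; t += step) {
    if (inside(t)) {
      // Surface crossed somewhere in (prev, t]; bisect to refine.
      float lo = prev, hi = t;
      for (int i = 0; i < 24; ++i) {
        float mid = 0.5f * (lo + hi);
        if (inside(mid)) hi = mid; else lo = mid;
      }
      *t_hit = hi;
      return true;
    }
    prev = t;
  }
  return false;
}
```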

E: The code initially used do-while for some reason, but I decided that this made me feel unclean.

Xerophyte fucked around with this message at Oct 28, 2017 around 14:04

Spatial
Nov 15, 2007



Most of IQ's demos are up on ShaderToy. Full source and realtime editing to learn from.

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!



Xerophyte posted:

If you're wondering about how to do the actual ray marching, it both is and is not complex. Making a simple ray marcher is easy. Making one that's both fast and robust tends to involve a lot of scene-specific tweaking. [...]

OK but that example code would just draw a sphere (for example) as completely flat / indistinguishable from a solid circle, right? How is it shaded? Is the surface normal also computed?
Also I'm interested in creating a closed tessellated mesh (exporting to STL format) from this surface. I am starting to understand how it can be rendered on screen with shaders, but is there any particular approach to tessellating based on SDFs? Or would SDF be a poor fit for a task like that?

Xerophyte
Mar 17, 2008

This space intentionally left blank


peepsalot posted:

OK but that example code would just draw a sphere (for example) as completely flat / indistinguishable from a solid circle, right? How is it shaded? Is the surface normal also computed?
Also I'm interested in creating a closed tessellated mesh (exporting to STL format) from this surface. I am starting to understand how it can be rendered on screen with shaders, but is there any particular approach to tessellating based on SDFs? Or would SDF be a poor fit for a task like that?

For shading you need the surface normal, yes. For the surface of a solid defined by an SDF the normal can be computed: the normal of the surface is the normalized gradient of the distance field. In some cases the gradient can be computed analytically, e.g. for a sphere centered at m_center with a radius of m_radius you'd do something like
C++ code:
void sdf(float3 p, float* sdf, float3* normal) {
  float3 sphere_to_point       = p - m_center;
  float  distance_from_center  = length(sphere_to_point);

  *sdf    = distance_from_center - m_radius;
  *normal = sphere_to_point / distance_from_center;
}
When that's not possible you can also approximate the gradient numerically with a finite difference.
C++ code:
// Compute the gradient of a distance field by using the forward difference. This is cheap but can 
// have issues with floating point precision since you need a small EPSILON. Using a central
// difference is more accurate, but then you need to compute the SDF 6 times instead of 3 times.
float3 normalized_gradient(SDF sdf, float3 p, float sdf_at_p) {
  const float EPSILON = 1e-5f;
  float3      gradient =
      float3(sdf(float3(p.x + EPSILON, p.y, p.z)) - sdf_at_p,
             sdf(float3(p.x, p.y + EPSILON, p.z)) - sdf_at_p,
             sdf(float3(p.x, p.y, p.z + EPSILON)) - sdf_at_p);
  return gradient / length(gradient);
}
The finite difference approach is usually necessary since a lot of the more complex transforms (soft minimums, domain warps, etc) you can do on an SDF make it hard to compute the gradient analytically.
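For reference, the central-difference version the comment above alludes to might look something like this. A sketch only: the `float3` struct and names are local to the example, and the epsilon is a tuning knob:

```cpp
#include <cassert>
#include <cmath>

struct float3 { float x, y, z; };

// Central-difference gradient: six SDF evaluations instead of three extra,
// but second-order accurate, so the epsilon can be larger before floating
// point precision problems show up.
template <typename SDF>
float3 normalized_gradient_central(SDF sdf, float3 p) {
  const float e = 1e-3f;
  float3 g = {
      sdf(float3{p.x + e, p.y, p.z}) - sdf(float3{p.x - e, p.y, p.z}),
      sdf(float3{p.x, p.y + e, p.z}) - sdf(float3{p.x, p.y - e, p.z}),
      sdf(float3{p.x, p.y, p.z + e}) - sdf(float3{p.x, p.y, p.z - e})};
  float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
  return {g.x / len, g.y / len, g.z / len};
}
```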

Texturing is trickier. You can do various projections (e.g. planar, cylindrical), but there are no custom uv wraps like you might do for a mesh. For fractal SDFs people sometimes texture with things like some iteration count or a projected coordinate.
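One common extension of the planar projection idea is triplanar mapping: project along all three axes and blend by how well the normal lines up with each axis. A made-up sketch of just the blend-weight part, not anything from IQ's pages:

```cpp
#include <cassert>
#include <cmath>

// Triplanar blend weights: each axis-aligned planar projection is weighted by
// how closely the surface normal aligns with that axis. The sharpness exponent
// is a tuning knob; higher values tighten the transition regions.
void triplanar_weights(float nx, float ny, float nz, float sharpness,
                       float* wx, float* wy, float* wz) {
  float ax = std::pow(std::fabs(nx), sharpness);
  float ay = std::pow(std::fabs(ny), sharpness);
  float az = std::pow(std::fabs(nz), sharpness);
  float sum = ax + ay + az;
  *wx = ax / sum;
  *wy = ay / sum;
  *wz = az / sum;
}
```

The final color is then the sum of the three projected texture lookups scaled by these weights.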


I'm not really that familiar with meshing algorithms. I don't believe SDFs can be meshed in a better way than any other implicit surface there. Marching cubes is dead simple and a good place to start if you want to make your own mesher, but the resulting mesh quality is pretty crappy. Higher-quality meshing algorithms exist but they're complex and involve various trade-offs. My impression is that you probably don't want to roll your own. CGAL is apparently a library that exists for this sort of thing; I have no idea how good it is.

Anecdotally, I know the approach Weta took for meshing the various SDF fractals they used for Ego in Guardians of the Galaxy 2 was to do a bunch of simple renders of the SDF, feed those renders into their photogrammetry software, then get a point cloud, then mesh that. I don't think I'd recommend that approach, but apparently it's good enough for production.

Xerophyte fucked around with this message at Oct 30, 2017 around 06:22

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today


Xerophyte posted:

I'm not really that familiar with meshing algorithms. I don't believe SDFs can be meshed in a better way than any other implicit surface there.
The fact that you can fairly reliably extract a normal means they're easier to mesh well than things that only give you in/out, at least. Obviously that's only relevant if you're using a more advanced meshing algo than marching cubes.

Xerophyte posted:

Anecdotally, I know the approach Weta took for meshing the various SDF fractals they used for Ego in Guardians of the Galaxy 2 was to do a bunch of simple renders of the SDF, feed those renders into their photogrammetry software, then get a point cloud, then mesh that. I don't think I'd recommend that approach, but apparently it's good enough for production.
That's hilariously hacky.

Xerophyte
Mar 17, 2008

This space intentionally left blank


Ralith posted:

That's hilariously hacky.

Basically, their use case was that they were rendering scenes set inside of a giant Sierpinski gasket planet, and they wanted to feed that geometry to their artists for manual edits. They first tried various more direct meshing approaches, but those gave a uniform level of detail, which was either too big to use or lacked detail in the foreground. Feeding the photogrammetry software a bunch of renders from the right position gave them a point cloud of appropriate density, which became meshes that were good enough for artists to work with.

You could definitely just generate the point cloud from the SDF or directly generate a Delaunay triangulation which takes the camera position into account, but the photogrammetry round trip was good enough and saved time so...

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!



Are there any online calculators or utilities that help create transformation matrices?

Xerophyte
Mar 17, 2008

This space intentionally left blank


peepsalot posted:

Are there any online calculators or utilities that help create transformation matrices?

There's a bunch of matrix and linear algebra libraries that can help you. Eigen is pretty popular.
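If you'd rather see the raw mechanics than pull in a library, composing a 4x4 transform by hand is only a few lines. Plain C++ sketch, column-vector convention (p' = M * p, so the rightmost factor applies first); all names are made up:

```cpp
#include <cassert>
#include <cmath>

// 4x4 transform, row-major storage, acting on column vectors.
struct Mat4 { float m[16]; };

Mat4 mul(const Mat4& a, const Mat4& b) {
  Mat4 r{};  // zero-initialized
  for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 4; ++j)
      for (int k = 0; k < 4; ++k)
        r.m[i * 4 + j] += a.m[i * 4 + k] * b.m[k * 4 + j];
  return r;
}

Mat4 translate(float x, float y, float z) {
  return {{1, 0, 0, x,  0, 1, 0, y,  0, 0, 1, z,  0, 0, 0, 1}};
}

Mat4 rotate_z(float radians) {
  float c = std::cos(radians), s = std::sin(radians);
  return {{c, -s, 0, 0,  s, c, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1}};
}

// Transform a point (implicit w = 1).
void apply(const Mat4& t, float x, float y, float z,
           float* ox, float* oy, float* oz) {
  *ox = t.m[0] * x + t.m[1] * y + t.m[2]  * z + t.m[3];
  *oy = t.m[4] * x + t.m[5] * y + t.m[6]  * z + t.m[7];
  *oz = t.m[8] * x + t.m[9] * y + t.m[10] * z + t.m[11];
}
```

Eigen's Geometry module does the same thing with less ceremony (and correctly handles inverses, normals, etc.), which is why people reach for it.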

lord funk
Feb 16, 2004



Anyone have a good example of using touch movement to rotate a 3D object around its center axis? I know I have to transform the angle based on the camera view matrix, but I also have that problem where when you lift your touch and put it back down, the model matrix is oriented to its last position and doesn't match what you might consider 'up' and 'down'.

edit: nm this looks like a good one:
http://www.learnopengles.com/rotati...h-touch-events/

lord funk fucked around with this message at Nov 16, 2017 around 16:37

UncleBlazer
Jan 27, 2011



lord funk posted:

Anyone have a good example of using touch movement to rotate a 3D object around its center axis? I know I have to transform the angle based on the camera view matrix[...]

If you're happy doing matrix manipulation then that's cool, but I'd recommend quaternions for rotations; I found them less of a headache. Not that it answers your touch issue though, sorry!
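For what it's worth, the core of the quaternion approach is small. A sketch (unit quaternions assumed, names made up):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };  // w + xi + yj + zk, assumed unit length
struct Vec3 { float x, y, z; };

// Quaternion rotating by `angle` radians around a unit-length axis.
Quat from_axis_angle(Vec3 axis, float angle) {
  float s = std::sin(0.5f * angle);
  return {std::cos(0.5f * angle), axis.x * s, axis.y * s, axis.z * s};
}

// Hamilton product.
Quat mul(Quat a, Quat b) {
  return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
          a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
          a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
          a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}

// Rotate v by q: q * (0, v) * conj(q).
Vec3 rotate(Quat q, Vec3 v) {
  Quat p  = {0.0f, v.x, v.y, v.z};
  Quat qc = {q.w, -q.x, -q.y, -q.z};
  Quat r  = mul(mul(q, p), qc);
  return {r.x, r.y, r.z};
}
```

Incremental drags then compose by quaternion multiplication, which sidesteps the drift and re-orthonormalization you'd eventually need with accumulated rotation matrices.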

Zerf
Dec 17, 2004

I miss you, sandman


peepsalot posted:

Are there any online calculators or utilities that help create transformation matrices?

I use WolframAlpha quite a lot; it's super handy. For example, it can do symbolic matrix inversions etc.

What are you after specifically?
