sinc
Jul 6, 2008
Anyone using RenderMan? What's the opinion on it these days? Is it still the thing to know in the serious rendering business?

I finally took some time to experiment a little with the shading language and here's what I got so far - it's supposed to look like wet asphalt or something (the sphere is there just for testing):



The whole thing is procedurally calculated and formed by displacement, except for the HDR environment it's reflecting. It's about 150 lines of RSL, applied to a flat plane and a sphere. I find this way of working and thinking pretty fascinating, and the Reyes algorithm has some very interesting and unusual features.
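The actual shader is RSL, but the core idea of a procedural displacement like this can be sketched in a few lines of Python. Everything here is illustrative, not the real shader: the hash and `asphalt_displacement` are made-up stand-ins for RSL's built-in `noise()` and whatever the 150 lines actually do.

```python
import math

def hash_noise(ix, iy):
    # Deterministic pseudo-random value in [0, 1) from integer lattice coords.
    # A crude stand-in for RSL's noise() built-in; any smooth noise would do.
    n = ix * 374761393 + iy * 668265263
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n & 0xFFFF) / 65536.0

def smooth_noise(x, y):
    # Bilinearly interpolated value noise over the integer lattice.
    ix, iy = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - ix, y - iy
    # Smoothstep the interpolation weights so the gradient is continuous.
    fx = fx * fx * (3 - 2 * fx)
    fy = fy * fy * (3 - 2 * fy)
    n00 = hash_noise(ix, iy)
    n10 = hash_noise(ix + 1, iy)
    n01 = hash_noise(ix, iy + 1)
    n11 = hash_noise(ix + 1, iy + 1)
    top = n00 * (1 - fx) + n10 * fx
    bot = n01 * (1 - fx) + n11 * fx
    return top * (1 - fy) + bot * fy

def asphalt_displacement(x, y, octaves=5, lacunarity=2.0, gain=0.5):
    # Fractal sum (fBm): large undulations plus progressively finer grain,
    # which is the usual recipe for rough surfaces like asphalt.
    amp, freq, total = 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * smooth_noise(x * freq, y * freq)
        amp *= gain
        freq *= lacunarity
    return total
```

In the real shader the returned value would offset each micropolygon along its normal; Reyes dices the surface finely enough that displacement like this produces actual silhouette detail rather than just a bump-map illusion.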


Also on the linear workflow, in general... holy crap, so that's why even a simple teapot in the cloudy, dull uffizi.hdr environment looks glaringly bright on the top and almost completely black on the bottom. I always thought the excessive contrast had to do with some subtle tone mapping issues with computer screens, but it turns out it's caused by something as relatively trivial as incorrect gamma handling. I guess some people have workflows that unintentionally work around most of the problems, but it sounds like something that definitely introduces a sense of unrealism into images, especially with GI. I wish someone had told me about this years ago.
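For anyone else who missed this for years like I did, the fix boils down to two tiny conversion functions: decode display-referred sRGB values to linear light before doing any lighting math, and encode back for display at the very end. These are the standard sRGB transfer functions, nothing renderer-specific:

```python
def srgb_to_linear(c):
    # Decode a display-referred sRGB value in [0, 1] to linear light.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Encode linear light back to sRGB for display.
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# The punchline: a 50% grey pixel is NOT 50% light. Doing lighting math
# directly on sRGB values (the "wrong" workflow) effectively exaggerates
# the contrast, which is exactly the crushed-shadows, blown-highlights
# look described above.
half_light = srgb_to_linear(0.5)        # roughly 0.214, much darker than 0.5
roundtrip = linear_to_srgb(half_light)  # back to 0.5
```

Adding two lights of "0.5 grey" in sRGB space gives a visibly wrong result compared to summing their linear intensities and encoding once at the end, and with GI the error compounds on every bounce.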


sinc
Jul 6, 2008

Heintje posted:

That's loving awesome. One of my first classes at SCAD is making procedural renderman stuff and integrating it with maya, I can't wait! What render time are those clocking and on what?

Thanks. :) The rendering time is about three minutes at 1280x800 (edit: on a Dual Core T7700), so that crop would be maybe 1 min. Which feels kind of fast after having played with unbiased renderers for a while. Currently it's just fed directly to the prman executable as RIB, but it shouldn't be too difficult to plug it into some actual modeling program.

sinc fucked around with this message at 10:50 on Jul 7, 2008

sinc
Jul 6, 2008
Adawait, the model looks nice in general, but I agree that the face has some seriously scary issues. Hard to point out why exactly; it could be in the geometry too.

Gromit, yeah, that's very much possible. It gets hard to see the work with fresh eyes after having stared at it for hours, so it may actually look completely different than originally intended. Using reference photos would be a useful practice, probably.

Continuing my RenderMan experiments... not finished, but it's something:



This one's about all kinds of translucency stuff like SSS. The interesting feature (for me) is that skeletal structure that's showing through the skin... the bone geometry is rendered as a separate pass and then accessed (by screen/NDC coordinates) in the shader and blurred according to the surface distance from the skin. Of course it could be done with distributed ray tracing, but that would add either a lot of noise or a lot of rendering time. Done this way, it adds maybe half a minute even with large renders. I guess this is the kind of trickery that is preferred in actual production (this was inspired by some Siggraph RenderMan course notes by Pixar). The effect is more interesting in animation, because the internals actually move in parallax with the viewing angle, so it makes the volume look much more 3D and translucent than with SSS only. It might be a bit too dark in this one and the modeling isn't too good, though. The nose looks weird at least.
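The core of that bone trick, stripped of all the RSL and pass-management plumbing, is just a variable-radius blur of the separately rendered bone pass, where the radius at each pixel comes from the skin-to-bone distance. This is a toy Python sketch of that idea, not the actual shader: `blur_radius` and its `falloff` mapping are made-up illustration values, and a box filter stands in for whatever filtering the real setup uses.

```python
def blur_radius(depth, max_radius=8.0, falloff=0.1):
    # Hypothetical mapping: the deeper the bone sits under the skin
    # (in scene units), the wider the screen-space filter, clamped so
    # cost stays bounded.
    return min(max_radius, depth / falloff)

def variable_blur(image, depth, falloff=0.1):
    # image: 2D list of floats, the separately rendered bone pass
    # (looked up by NDC coordinates in the real shader).
    # depth: matching 2D list of skin-to-bone distances per pixel.
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r = int(round(blur_radius(depth[y][x], falloff=falloff)))
            total, count = 0.0, 0
            # Box average over a window whose size depends on depth.
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += image[yy][xx]
                        count += 1
            out[y][x] = total / count
    return out
```

The reason this beats distributed ray tracing for the effect is visible in the structure: it's one image-space filter per pixel instead of many scattered rays per shading point, so it's noise-free and its cost is independent of scene complexity.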

The whole thing is controlled from Houdini, which is a really interesting and powerful package as people here have said. For example, it doesn't appear to support RenderMan SSS out of the box, but it was really easy (well, once I got the hang of it) to set up a rendering operator network that automatically renders the irradiance passes and runs the point cloud filter programs when needed. I can imagine it's pretty powerful with the particle and dynamics stuff, but I haven't got that far yet.

sinc
Jul 6, 2008
Thanks for the comments, glad you liked it. I guess I need to work on the "achieving a specific look instead of just something cool" aspect though, it certainly isn't intended to look like a light bulb. :)

Heintje, I'm studying computer science somewhere in the backwoods of Europe, but I wouldn't mind ending up in the CG business once I graduate. That makes me pretty comfortable with the technical side of things I guess.

IHeartBoobs, I'm currently using Houdini more as something that feeds the geometry and settings to RenderMan and manages the passes and things like that. The actual shading is done in RenderMan Shading Language. It renders a simplified pass with some special shaders that measure the distance from the skin geometry to the bone surface along the slightly refracted viewing direction, and stores it in a non-blurred texture that is colored according to the distance. At the actual render time, it blurs different color channels at different filter sizes and adds them together as black. The color trickery is used because it gives a sufficient effect and doesn't require any intense special convolution calculations. And yeah, in a relatively straightforward case like this, it could probably be achieved as a post effect too if I was outputting a bunch of different shading components and compositing them afterwards (which could be a good idea anyway).
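To make the per-channel trick concrete, here's a tiny Python sketch of the idea (1D for brevity, and with made-up radii; the real version obviously runs on the distance-coded texture described above). Each channel gets a different fixed filter size, so after recombining, the narrow blur keeps the shallow detail sharp while the wide ones supply the deep, diffuse falloff, and no per-pixel variable-size convolution is ever needed:

```python
def box_blur_1d(row, radius):
    # Simple 1D box blur; a real setup would likely use a nicer kernel,
    # but a box filter is enough to show the per-channel trick.
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def cheap_multiscale_scatter(red, green, blue):
    # Blur each channel at a different fixed filter size and hand the
    # results back for recombination. The radii (1, 4, 16) are invented
    # illustration values, not production settings.
    return (box_blur_1d(red, 1), box_blur_1d(green, 4), box_blur_1d(blue, 16))
```

Running an impulse through it shows the effect: the red channel stays a fairly tight spike while blue spreads across the whole row, which is exactly the near-surface-sharp / deep-soft split the distance coding exploits.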


Useless, that's looking really good. I can't really see any major flaws that would give it away as CG. For minor nitpicks, some texture blurriness and stretching here and there might be a bit distracting. With some post processing as a final touch - maybe punchier color correction and some subtle faux lens defects - it could look very convincing.

sinc
Jul 6, 2008
Finally had some time to just play with stuff for no specific purpose, here's something I put together over the holidays. Don't know if I'm finished with it, but I really need to start doing something else now. It's modeled mostly in Max and the rest is done with Houdini/Mantra. Mostly just traditional lighting with a bunch of spotlights, shadow maps, blinns and lamberts. Friends of Russian literature might recognize the scene...

sinc
Jul 6, 2008

Heintje posted:

Houdini's pyro tools are loving awesome:


Yeah, it looks nice, at least based on the videos so far. The Up Res feature sounds very useful too, if it works as well as they say. I'm assuming it's the wavelet turbulence algorithm? I just recently happened to code a 2D fluid simulator for a class (in C++) and implemented the wavelet turbulence algorithm on top of it, and got results like this:
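For anyone curious what an up-res pass does at all, here's a heavily simplified Python sketch of the general shape of it (1D, and nothing like the real wavelet turbulence paper, which measures the energy of the finest resolved band with a wavelet transform and synthesizes incompressible band-limited noise; here plain random detail scaled by a `strength` parameter stands in for that):

```python
import random

def upsample_1d(field, factor):
    # Linearly interpolate a coarse simulation field onto a finer grid.
    n = len(field)
    fine = []
    for i in range((n - 1) * factor + 1):
        t = i / factor
        j = min(int(t), n - 2)
        f = t - j
        fine.append(field[j] * (1 - f) + field[j + 1] * f)
    return fine

def uprez_with_turbulence(field, factor, strength=0.1, seed=0):
    # The up-res idea: run the expensive solve at low resolution, then
    # interpolate up and inject detail that only exists at the new, finer
    # scales. Noise is forced to zero at the coarse sample points so the
    # large-scale motion from the simulation is preserved exactly.
    rng = random.Random(seed)
    fine = upsample_1d(field, factor)
    noise = [rng.uniform(-1, 1) if i % factor else 0.0
             for i in range(len(fine))]
    return [f + strength * n for f, n in zip(fine, noise)]
```

The appeal is entirely in the cost structure: the solver runs at coarse resolution while the detail synthesis is a cheap per-cell operation, which is why an up-res pass can be orders of magnitude faster than simulating at full resolution.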

sinc
Jul 6, 2008
Here's a little Houdini test, ten square miles of procedurally generated city. Cooks in a couple of minutes from scratch to this size. There's nothing hand-modeled or textured. The spheres are placeholders for stuff like trees. Obviously it could use some more variety and some of the buildings are sort of silly, but hopefully that will be hidden by lighting and materials once I get that far.

Click for big



sinc
Jul 6, 2008
Currently it just does whatever it wants, but I'm planning to add some controllability to it if I don't get too lazy. Like painting the average building height and other parameters on a large plane or something.

The method is roughly as follows. It scatters a bunch of points on the area, and forms a Delaunay-ish triangulation between them. It then computes a Voronoi grid from this (it's the dual), giving a cell-like pattern for the major roads. The in-between areas are then sliced with grid patterns a couple of times. Finally the area has been split into a bunch of roads and a bunch of flat polygons representing building shapes. These are then randomly combined with their neighbors and extruded in various ways. Some of the buildings are simple L-systems. The windows come from a procedural displacement shader.
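The first two steps (scatter points, get a Voronoi cell pattern whose boundaries become the major roads) can be sketched in plain Python. This is not the Houdini network, just an illustration: Houdini builds the partition as actual geometry via the Delaunay dual, while here a brute-force nearest-point rasterization is the easiest way to show the same cell pattern, and all the function names are made up.

```python
import random

def scatter_points(n, size, seed=42):
    # Step 1: scatter seed points over the area (seeded for repeatability).
    rng = random.Random(seed)
    return [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]

def voronoi_labels(points, size, res):
    # Step 2: label each grid cell with its nearest seed point. The set of
    # cells sharing a label is that point's Voronoi region.
    labels = []
    for gy in range(res):
        row = []
        for gx in range(res):
            x = (gx + 0.5) * size / res
            y = (gy + 0.5) * size / res
            row.append(min(range(len(points)),
                           key=lambda i: (points[i][0] - x) ** 2
                                       + (points[i][1] - y) ** 2))
        labels.append(row)
    return labels

def road_mask(labels):
    # Step 3: a cell sits on a major road wherever its label differs from a
    # neighbour's, i.e. on a Voronoi region boundary. The interiors are the
    # blocks that would then get sliced into footprints and extruded.
    res = len(labels)
    return [[any(labels[y][x] != labels[ny][nx]
                 for ny, nx in ((y, x + 1), (y + 1, x))
                 if ny < res and nx < res)
             for x in range(res)] for y in range(res)]
```

From there the real setup takes over: the block interiors get grid-sliced into building footprints, merged with neighbors, and extruded, with L-systems and the displacement shader supplying the rest.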

I guess it could support all kinds of different layouts and stuff by slightly altering some of the steps, the Houdini networks keep it pretty flexible. I'm kind of eager to get to rendering now, though.

sinc
Jul 6, 2008

sigma 6 posted:

Badass. Are you following that DVD or did you just feel like doing this for fun?
The windows displacement is especially impressive IMHO.

Just for fun. :) I haven't seen the DVD, but I did get some ideas from googling around and also saw some screenshots from it.

sinc
Jul 6, 2008
I saw the photos of the recent Sarychev eruption (and some St. Helens ones) and thought they pretty much looked like iso-surfaces of a smoke simulation with some noisy displacement on top, so I couldn't resist trying in Houdini:



It's still at a pretty early stage, I'm just testing the idea. It probably won't animate too well without some major extra thinking, but for a few-second near-still developed-stage clip it might work (these events take hours or days anyway). I guess my question is: does it still look like a reasonably massive ash column to you, or is it more like a bunch of weird noise?

sinc
Jul 6, 2008
Thanks for the comments. :) Yeah, the scale and context aren't very obvious at this point, because there's no attempt to model the surroundings yet. Also I'm planning to make it spread and skew at the top and to add a ton of secondary smoke floating around. So far I just kind of wanted to see if the approach of rendering the main plume as a solid has any potential.

I'm going for an effect like this: http://vulcan.wr.usgs.gov/Imgs/Jpg/MSH/Images/MSH80_msh_eruption_05-18-80_Krimmel_80S3-141_bw.jpg (warning, huge). At these sizes (even tens of kilometers tall) the smoke seems to have quite a solid and defined character, which would require a ridiculously small step size and massive resolution with the typical method. But I guess it'll still need stuff like translucency/SSS/etc to soften it.


sinc
Jul 6, 2008

Ratmann posted:

A good setup to use actually would be to get the shape you want from a lower-rez volume, and then use an uprez setup in DOPs to uprez the volume. Also, you can use a displacement shader with the volume shader, and that'll give it more detail.

Oh, I hadn't even thought you could use displacements with volume shaders, but really, why not. Should be worth examining.

Though still, I hope I'm not getting annoying here, because you guys don't seem to think this is a very fruitful approach... :) Here's a quick test: I comped my own smoke next to the one in my reference photo. Does it look different in actual context? It's not a perfect match of course, but at least to my eye it doesn't look too bad, which sort of suggests that this could be made to work, at least at reasonable viewing distances.



And for comparison, the original:
