OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I decided the best way to make a Battlefield-ish game was to start with the Quake 3 engine and make a few minor modifications:



Vertex/pixel shader support, a new material system, radiosity (compiled with GPU acceleration), directional lightmaps, terrain, skeletal models, load-time asset compilation, blah blah...

Oh yeah, and I suck at recruiting so I'm stuck in tech demo hell. :suicide:

OneEightHundred fucked around with this message at 03:55 on May 19, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ashgromnies posted:

How do you learn to do stuff like vertex shaders? I assume there's an "accepted" way of doing it. I'm really curious about all this neat graphics programming after spending years in Linux command-line hell.
Considering I'm self-taught on almost everything (plus exchanging ideas with other people who are better than me), I couldn't tell you what the accepted way is.

If anything, I'd say the easiest way is to just use them. They're not that hard to pick up as long as you understand what they do, understand the math behind the operations you already do, and understand how the rendering pipeline works. OpenGL in particular makes it dirt easy; it's not much harder in D3D, except that you have to calculate the projection matrix yourself and they'll try tempting you to do everything with their shitty FX framework.

There are a metric shitload of samples out there. If anything, I'd recommend downloading the Cg toolkit, because it has a ton of useful samples (Cg is almost identical to HLSL and 90% the same as GLSL). More of the difficulty comes in understanding how processes like normalmapping and those stupid water shaders everyone has nerdgasms over actually work than in understanding how to use them in place of existing systems.

I haven't read GPU Gems, but I've seen a ton of good code snippets with it cited as the source, so if there's one book to pick up on the subject, that's probably it.

If you want to get out of command-line hell, the best place to start would be picking up GLUT or SDL. GLUT is easier to get into; SDL scales better.
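If it helps, here's roughly the smallest GLUT program that gets you from zero to pixels on screen (a from-memory sketch, so treat it as a starting point rather than gospel):

code:

#include <GL/glut.h>

/* Draw one colored triangle per frame. */
static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(0.0f, 0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("out of command-line hell");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}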

OneEightHundred fucked around with this message at 19:15 on May 19, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Wuen Ping posted:

Yes, we are using Ogre. I was very resistant to using it at first, because when I first looked at it, it did stupid things like reinvent a substantial portion of the STL.
I've heard a good number of horror stories about shitty STL implementations, which is probably why practically every major engine I look at has its own versions of STL-ish classes.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

tripwire posted:

Thanks, good stuff to know.
Are you able to use the GPU to speed up vector math/physics yet? Have they gotten around to enabling that? I seem to remember that being the killer selling point.
The problem with GPU-accelerating physics calculations is the latency, which is still a bit too high for comfort when doing gameplay-critical physics.

Nuke Mexico posted:

NIH syndrome at its worst
It's not really NIH when flaws in the STL are frequently cited as reasons for making them.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Nuke Mexico posted:

To be fair though, every case of "Not-Invented-Here" Syndrome I've ever dealt with in the past was rationalized by "flaws" in the 3rd party system
Well, I meant fairly rational reasons, e.g. executable size reductions on the order of several megabytes, bloated interfaces that make extension difficult, and poor allocator support.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Null Pointer posted:

I've been working on a level editor for about a week.



Right now I only have sectors, extrusion and sector subdivision implemented, but later on it's going to support positive-space volumes, patches and path lofting.
Why bother with manual sector division anyway? Just support positive and negative volumes, and you should be able to generate the sectors by building a BSP tree from them.
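The build itself is just the classic recursive split, something like this (a sketch; Polygon, ChoosePlane, and SplitPolygon stand in for whatever your editor already has):

code:

#include <vector>

struct Plane { float nx, ny, nz, d; };
struct Polygon { /* vertices, material, etc. */ };

// Stand-ins so the sketch is self-contained. SplitPolygon is assumed to
// consume polygons coplanar with the splitter (a real build stores them
// at the node; omitted here).
Plane ChoosePlane(const std::vector<Polygon> &polys);   // pick a splitter
void SplitPolygon(const Polygon &poly, const Plane &plane,
                  std::vector<Polygon> &front, std::vector<Polygon> &back);

struct BspNode
{
    Plane plane;
    BspNode *front = nullptr;
    BspNode *back = nullptr;
};

BspNode *BuildBsp(const std::vector<Polygon> &polys)
{
    if (polys.empty())
        return nullptr;   // leaf; the open leaves of the finished tree are your sectors

    BspNode *node = new BspNode;
    node->plane = ChoosePlane(polys);

    std::vector<Polygon> front, back;
    for (const Polygon &p : polys)
        SplitPolygon(p, node->plane, front, back);

    node->front = BuildBsp(front);
    node->back = BuildBsp(back);
    return node;
}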

quote:

It is kind of a mod, since it overwhelmingly uses graphics assets from Fallout:Tactics, but it has nothing to do with Tactics
You might want to make sure it requires Fallout Tactics in some way to avoid getting C&D'd. Supposedly Interplay's a rotting corpse right now, but they're still aggressively enforcing IP ownership.

OneEightHundred fucked around with this message at 09:29 on Jul 14, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I was sick of the ugly console interface for my map compiler, so I made a front-end for it. And holy shit do I hate .NET now. (No, Microsoft, I do not want to make a delegate, a relay function, and a method on the form just to change controls from another thread, nor do I want to write trivial conversion functions between char and wchar because ConvertAll doesn't have a default for trivial conversions. Fuck you.)


[Screenshot: 1680x1050]

OneEightHundred fucked around with this message at 04:49 on Aug 24, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
There we go: color-corrected lighting and radiosity, computed in linear space, which is much more accurate than doing it in gamma space:

[Screenshot: 1031x800]

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Is that real time radiosity?
No, it's baked into the lightmaps.

MasterSlowPoke posted:

Quake 3's kind of popular here.
The formats are easy, and the engine's complete enough to make a game out of, which is what I'm trying to do. It's worlds easier to take something that works and tune it until it does what you need than to reinvent it all, unless there's some major fundamental flaw in the design.

The Quake 3 renderer got gutted a while ago. Right now it's using my own rendering middleware (which contains no Quake 3 code or dependencies; the model viewer can load and render maps just fine, for example), and the only rendering code left in Quake 3 is some basic setup stuff and some glue to the new renderer.

OneEightHundred fucked around with this message at 19:28 on Aug 24, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Bummer :(
Real-time radiosity is probably never going to become worthwhile: the memory requirements to make it go anywhere near fast are insane, and any change in the scene geometry requires a massive amount of computation time.

The best solutions at the moment for good ambient lighting are to either bake the ambient term and do the direct lighting in real time, or use SSAO. SSAO works so well right now that just about all of the real-time ambient lighting work is focused on it.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Sagacity posted:

What about this?
Looks like that's done using a very sparse patch count and dynamically recalculated textures. The sectioning concept is actually kind of neat, since it would let you reduce memory requirements by a lot (transfers would index a reduced target list instead of a global patch table), but the CPU needs are still pretty high. A whole core to itself for 160 updates per second? Yikes. It also looks like it calculates textures on the CPU, so it probably hammers the bus as well.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Iterating on my last post: soft shadows.


[Screenshot: 1680x1050]

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

HicRic posted:

That looks lovely - what sort of techniques are you using to make it?
"Spawn more lights" :iamafag:

I can get away with it since direct lighting uses beamtree casting, which is really fast. For terrain I'll be using a blur kernel based on depth variance; it's much slower, but it can actually record depth information properly, which the beamtree caster can't really do without ridiculous memory consumption.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Back on this project again, playing around with global illumination some more. Most implementations use photon mapping, patch-based radiosity with lightmaps, or real-time SSAO. I made my own method, which works at a much higher resolution, technically at the expense of distribution accuracy. Fortunately, since last time I've worked past that problem by normalizing the scene lighting every pass, and the results improved considerably. :)

It's also twice as fast, because I converted the main bottleneck to SSE; it now processes something like 400-600 samples per second on my Athlon 64 4000+.

Test map, 3 lights: The sun, plus two in the "building"


[Screenshot: 1280x800]


(The black spot artifacts where solids intersect the terrain have been fixed since that screenshot was taken.)

OneEightHundred fucked around with this message at 21:32 on May 9, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I dunno, this is one of those times I wish I had more art to work with so I could put out something more impressive. I also wish I hadn't been too burned out to continue this work earlier, but hey, I lost my job.


It's actually not true radiosity, and I've consequently stopped calling it that, but it's a convincing emulation of it:
- Small wide-FOV scene renders are snapped from every sample point.
- Pixels from the scene render are used to determine the ambient contribution based on direction and manifold area. This is currently the major bottleneck, and converting it to SSE helped a lot.
- The contribution is combined with a recast of the direct light influences.
- All light is rescaled to produce the same total scene brightness as the direct light contribution alone.
- Repeat.

All passes but the final one are done at low resolution to reduce computation time.

The main difference between this and true radiosity is that radiosity normalizes per sample. My theory is that luminance is uniform enough in real-world scenarios that global normalization will work fine.
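In pseudo-C++, the outer loop is basically this (a simplified sketch; LightmapSample, RenderAmbient, and CastDirectLight are stand-ins for the real passes):

code:

#include <vector>

struct LightmapSample { float r, g, b; };

// Stand-ins for the real passes:
void CastDirectLight(std::vector<LightmapSample> &lightmap);
void RenderAmbient(std::vector<LightmapSample> &lightmap);

static float TotalBrightness(const std::vector<LightmapSample> &samples)
{
    float total = 0.0f;
    for (const LightmapSample &s : samples)
        total += s.r + s.g + s.b;
    return total;
}

void BakeBounces(std::vector<LightmapSample> &lightmap, int numBounces)
{
    CastDirectLight(lightmap);   // direct-only pass sets the target brightness
    const float directBrightness = TotalBrightness(lightmap);

    for (int bounce = 0; bounce < numBounces; bounce++)
    {
        RenderAmbient(lightmap);     // gather bounce light via scene renders
        CastDirectLight(lightmap);   // recast the direct influences on top

        // Global normalization: rescale everything so the total scene
        // brightness matches the direct contribution alone (true radiosity
        // would normalize per sample instead).
        const float scale = directBrightness / TotalBrightness(lightmap);
        for (LightmapSample &s : lightmap)
        {
            s.r *= scale;
            s.g *= scale;
            s.b *= scale;
        }
    }
}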

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Actually, I had the exact same idea for an algorithm in order to speed up offline AO calculations!

Great minds think alike :D
This is, of course, why software patents are terrible: no matter how clever your ideas seem, there's always someone else thinking of them.


The hard part of this is going to be turning it into an actual presentable portfolio entry. I'm a programmer; I'm no good at this "art" shit. :smith:

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Why is this that much of a problem?
Because why have this:


When you can have this:

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

heeen posted:

Can you give some detail on why your method is better/faster/more applicable than, say, q3map2 -light -bounce X etc?
q3map2's approach is conceptually a decent idea: it takes polygons and chops them up until the light gradient is low enough, then spawns area lights from them. The approach I'm using is just faster, because doing a scene render and running a big SIMD multiply/accumulate over it is faster than casting a light by several orders of magnitude.

Of course, q3map2 is further impaired by its light casting algorithm being slow AND scaling very poorly.
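The gather pass boils down to something like this per sample point (an illustrative SSE sketch, not the actual code; the weights fold in direction and manifold area):

code:

#include <cstddef>
#include <xmmintrin.h>

// pixels and weights are arrays of RGBA float quads, count quads each.
void AccumulateWeighted(const float *pixels, const float *weights,
                        size_t count, float out[4])
{
    __m128 accum = _mm_setzero_ps();
    for (size_t i = 0; i < count; i++)
    {
        __m128 px = _mm_loadu_ps(pixels + i * 4);
        __m128 wt = _mm_loadu_ps(weights + i * 4);
        accum = _mm_add_ps(accum, _mm_mul_ps(px, wt));
    }
    _mm_storeu_ps(out, accum);
}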

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

heeen posted:

I see. How fast is your algorithm in actual numbers on, say, the three lights from your screenshot? I take it 300 samples per second means not realtime?
I think lighting that map takes about an hour; about half of that is the final terrain light pass (which uses the aforementioned slow, poorly-scaling algorithm).

It uses a 6"x6" lightmap resolution.

slovach posted:

What Q3 code base are you using as a base?
Vanilla. I didn't see much compelling reason to use a different base, and I've already fixed the security issues IOQuake touched on.

quote:

How much of it is left? I remember Carmack saying somewhere that he wanted to modernize the renderer a bit for Quake Live by adding support for VBOs and so on, but he didn't in the end because of time (I think that was the reason).
The new renderer operates stand-alone, which is why the global illumination tool and model viewer are both perfectly capable of rendering maps.

All that's left of the original renderer is the WGL code, and 1500 lines of glue to bind the Quake 3 refresh API to the new renderer.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
After finally deciphering spherical harmonics to the point where I discovered that order-1 spherical harmonics are ridiculously easy to implement, I've started converting the lightmaps to use them instead of my existing approach.

They do a much better job of handling lighting influences from multiple directions, they work better with gloss, and they'll probably work better with texture compression too.
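Accumulating a light into the four coefficients is about this simple (a sketch using the same constant-free convention as the sampling snippet a couple of posts down, i.e. value = sh0 + dot(n, shDir); the 0.5 factors are my own normalization choice, so a light along the normal evaluates to full intensity and one behind it to zero):

code:

struct Vec3 { float x, y, z; };

struct LinearSH
{
    Vec3 constant;          // sh0: direction-independent term (RGB)
    Vec3 dirX, dirY, dirZ;  // sh1..sh3: directional terms (RGB)
};

// dir points toward the light (normalized); color is the light's
// already-attenuated, already-shadowed contribution at this sample.
void AccumulateLight(LinearSH &sh, const Vec3 &dir, const Vec3 &color)
{
    sh.constant.x += 0.5f * color.x;
    sh.constant.y += 0.5f * color.y;
    sh.constant.z += 0.5f * color.z;

    sh.dirX.x += 0.5f * color.x * dir.x;
    sh.dirX.y += 0.5f * color.y * dir.x;
    sh.dirX.z += 0.5f * color.z * dir.x;

    sh.dirY.x += 0.5f * color.x * dir.y;
    sh.dirY.y += 0.5f * color.y * dir.y;
    sh.dirY.z += 0.5f * color.z * dir.y;

    sh.dirZ.x += 0.5f * color.x * dir.z;
    sh.dirZ.y += 0.5f * color.y * dir.z;
    sh.dirZ.z += 0.5f * color.z * dir.z;
}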


[Screenshot: 1280x800]


:toot:

I'm actually kind of annoyed that I didn't use these sooner. Ironically, it turned out that order-1 SH is identical to an approach that I was going to try anyway, but didn't think would work.

OneEightHundred fucked around with this message at 03:25 on May 17, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

I'm a little confused -- what do you mean when you say you're using "spherical harmonics" for light-maps? Are you just unrolling the Theta and Phi coordinates into U and V?
Linear spherical harmonics doesn't need the theta/phi coordinates, because you can solve it using Cartesian coordinates: the linear basis functions are just proportional to the axial components of the normalized sampling direction (sin(theta)cos(phi), sin(theta)sin(phi), and cos(theta)).

// Evaluating linear SH straight from a Cartesian direction:
float3 nDir = normalize(dir);
float sampledValue = sh0 + nDir.x*sh1 + nDir.y*sh2 + nDir.z*sh3;

I think it's possible to solve the quadratic band in Cartesian space too; the problem is that almost all of the material I can find on SH uses polar coordinates. I'd try figuring out how to resolve it into Cartesian space, but 4 lightmaps per surface is already pushing it, so fuck it.

OneEightHundred fucked around with this message at 00:32 on May 19, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

Oh, cool -- I hadn't even realized you could linearize spherical harmonics like that. What's the benefit of using spherical harmonics for that? I've seen Wavelet compression used for textures to fantastic effect, usually preserving a lot more detail with fewer terms.
Mainly that it's cheap to evaluate and direction-independent. Apparently it's also possible to do a cheap low-frequency specular approximation with linear SH, a trick Halo 3 uses: the directional components signify how much light is coming from each axial direction, so you can add up the RGB components for each axis and normalize the result to get a primary light direction. Since the constant component already represents the angle-independent amount of light hitting the surface, you can use that as the color and treat the whole thing as a directional light.
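Sketched out, the extraction looks like this (illustrative names, not Halo's actual code):

code:

#include <cmath>

struct Vec3 { float x, y, z; };

static float Luminance(const Vec3 &c) { return c.x + c.y + c.z; }

// sh0 is the constant RGB term; shX/shY/shZ are the RGB directional terms.
void DominantLight(const Vec3 &sh0, const Vec3 &shX, const Vec3 &shY,
                   const Vec3 &shZ, Vec3 &outDir, Vec3 &outColor)
{
    // Sum the RGB components per axis, then normalize to get the direction.
    Vec3 dir = { Luminance(shX), Luminance(shY), Luminance(shZ) };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    if (len > 0.0f)
    {
        dir.x /= len;
        dir.y /= len;
        dir.z /= len;
    }
    outDir = dir;

    // The constant term is the angle-independent light amount; use it as
    // the color of the substitute directional light.
    outColor = sh0;
}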

I jotted up a mini-article on the various lighting methods I've looked at for this, and the ones I settled on:
http://oneeightzerozero.blogspot.com/2009/05/lighting-is-everything.html

I wasn't even aware people had tried wavelet compression for lightmaps. How does that even work?

OneEightHundred fucked around with this message at 07:11 on May 21, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I swear the more I get done, the more it seems like I have left to do. :(

Oh well, at least world geometry shadows models now ...


[Screenshot: 727x560]


... and I got the Mandatory Gimmicky Water Shader out of the way ...


[Screenshot: 755x635]



(Yes, I know there's a line near the model's hip in the shadows image; the material system doesn't handle clamp-to-edge yet.)

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Shadow volumes are so 2004 :P.
That's actually using a cubemap depth texture. I'm still tinkering with some edge-softening approaches.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Actually, I think I just nailed down a technique that does a decent soft-shadow approximation; I'll post screenshots once my desktop has Internet access again.

Otto Skorzeny posted:

Are you planning on adding antialiasing at any time? It looks real neat but the jaggies are a bit irksome
Yeah, but it's VERY far down the priority list because very little else depends on it. It's one of those things where it could be the LAST thing I implement and it wouldn't change the development process for the entire rest of the game at all.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
There, hacked up a shadow-softening technique.


[Screenshot: 833x773]

OneEightHundred fucked around with this message at 08:18 on Jul 15, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

akadajet posted:

I think that errors sometimes look more interesting than the expected results.


[Screenshot: 1296x758]

"Wachovia Logo Generator"

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Fixed a bunch of boneheaded bugs in my global illumination toy, and the result was an extreme improvement in lighting quality. I think I got them all, because it actually looks really good now.

12" sample size + 1 bounce (a.k.a. lowest quality):


[Screenshot: 640x400]


Diffuse light only:


[Screenshot: 640x400]


(Yes, I know there's a bright spot on one of the pillars. I know what causes it; I'm still working on fixing it.)

In other news, one of the test maps won't even run without my computer BSODing, thanks to an infinite loop in ATI's shitty video drivers. :suicide:

OneEightHundred fucked around with this message at 10:52 on Sep 28, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Changed the reflectance model to Cook-Torrance. It's extremely hard to illustrate properly with a screenshot rather than a video, but gloss behaves much more convincingly now.
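For reference, the textbook Cook-Torrance specular term looks like this (sketched on the CPU with a Beckmann distribution and Schlick's Fresnel approximation; not my exact shader):

code:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 Normalize(const Vec3 &v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// n, v, l normalized; roughness in (0,1]; f0 = reflectance at normal incidence.
float CookTorranceSpecular(const Vec3 &n, const Vec3 &v, const Vec3 &l,
                           float roughness, float f0)
{
    Vec3 h = Normalize({ v.x + l.x, v.y + l.y, v.z + l.z });
    float ndoth = std::max(Dot(n, h), 1e-4f);
    float ndotv = std::max(Dot(n, v), 1e-4f);
    float ndotl = std::max(Dot(n, l), 1e-4f);
    float vdoth = std::max(Dot(v, h), 1e-4f);

    // Beckmann microfacet distribution
    float m2 = roughness * roughness;
    float c2 = ndoth * ndoth;
    float d = std::exp((c2 - 1.0f) / (m2 * c2)) / (3.14159265f * m2 * c2 * c2);

    // Schlick's Fresnel approximation
    float f = f0 + (1.0f - f0) * std::pow(1.0f - vdoth, 5.0f);

    // Cook-Torrance geometric attenuation
    float g = std::min(1.0f, std::min(2.0f * ndoth * ndotv / vdoth,
                                      2.0f * ndoth * ndotl / vdoth));

    return d * f * g / (4.0f * ndotv * ndotl);
}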

Tweaked volumetric fog, which now properly handles progressive density gradients at the fog surface rather than transitioning immediately from 0% to 100% density.
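The density ramp amounts to something like this (a simplified ray-march sketch of the idea, not the renderer's actual implementation):

code:

#include <algorithm>
#include <cmath>

// Fog fills everything below fogTop; density ramps from 0 at the surface
// to baseDensity over fadeDepth instead of starting at full strength.
float FogDensityAt(float y, float fogTop, float fadeDepth, float baseDensity)
{
    float depthBelowSurface = fogTop - y;
    float ramp = std::min(std::max(depthBelowSurface / fadeDepth, 0.0f), 1.0f);
    return ramp * baseDensity;
}

// Opacity (1 - transmittance) along a segment from height y0 to y1 covering
// worldDistance units, using simple fixed-step integration.
float FogOpacity(float y0, float y1, float worldDistance,
                 float fogTop, float fadeDepth, float baseDensity)
{
    const int steps = 32;
    float accum = 0.0f;
    for (int i = 0; i < steps; i++)
    {
        float t = (i + 0.5f) / steps;
        float y = y0 + (y1 - y0) * t;
        accum += FogDensityAt(y, fogTop, fadeDepth, baseDensity)
               * (worldDistance / steps);
    }
    return 1.0f - std::exp(-accum);   // Beer-Lambert
}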


[Screenshot: 640x400]


Demo of transitional volumetric fog on a test map with an 8x density modifier; the "wedge" tip is where the top of the fog volume starts.


[Screenshot: 640x400]


Unfortunately, I'm not going to be implementing NS2-style shadow casting within fog volumes, because I'm trying to spend LESS time on renderer features and move on.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Yeah, there's a water surface about 4" above the "top" floor portion.

And yeah, I know the color looks like vaporized shit. It was originally a paler "muddy water" type of color, but then I added gamma correction, which shifted it toward orange, and I haven't fixed it yet.

OneEightHundred fucked around with this message at 23:32 on Nov 10, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I've been tinkering with non-photorealistic rendering a bit to see if it's worthwhile, with somewhat interesting results. I need to delve a bit deeper into the theory behind this stuff if I'm really going to take it all the way, but one thing that's fairly common in color comic book art is the combination of stark delineation between light and shadow with "airbrushed" subtle tone changes within the light and dark areas.

I think this can be approximated in the lighting model:


[Screenshot: 640x400]


This is a work in progress, and I don't think it's the greatest example, since the technique would probably work a lot better with pastel textures than with busy photorealistic ones.

OneEightHundred fucked around with this message at 13:31 on Nov 25, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Dubious update: finally fucking got quadratic SH working. Radiosity also now uses a three-sample capture to get a full hemispherical manifold, meaning it handles contributions from shallow angles properly and overall looks exactly like it should. I've been considering moving to photon mapping instead, but this is so fast I really don't care.


[Screenshot: 1280x800]


This was a terribly uninspiring chore for a number of reasons:
- Rotating an SH vector by a matrix is "easy", yet the formulae are surprisingly difficult to find. I finally dug them out of a thoroughly buried Bungie presentation, and I have yet to find them anywhere else.
- The Bungie version is apparently flipped on the Z axis for some reason.
- One of the formulae in the Sony paper is wrong.
- Most existing publications and code samples have numerous "mystery values", and still factor things by pi for some incomprehensible reason.

This was so annoying to get working that I decided to post the "spoilers" and spare everyone else the frustration:
- Code to calculate the coefs for Lambertian reflectance based on a directional light.
- Code to rotate an SH vector using a 3x3 matrix.
- Code to sample the SH vector using Cartesian coordinates.
- No "mystery values"
- Eliminated sqrt(3) from coef (2,0) by premultiplying the relevant coefficient and dividing during sampling, so all constants are now rational numbers.

As for why you'd want to use quadratic... because it's very accurate, of course!
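For reference, here's the plain Cartesian evaluation of the quadratic basis using the standard textbook constants (the generic form, not my rational-constant variant):

code:

// Cartesian evaluation of the 9 quadratic SH basis functions, using the
// standard real-SH constants (Ramamoorthi & Hanrahan).
// dir (x, y, z) must be normalized; out receives Y(0,0), Y(1,-1), Y(1,0),
// Y(1,1), Y(2,-2), Y(2,-1), Y(2,0), Y(2,1), Y(2,2) in that order.
void EvalQuadraticSH(float x, float y, float z, float out[9])
{
    out[0] = 0.282095f;                         // constant band
    out[1] = 0.488603f * y;                     // linear band
    out[2] = 0.488603f * z;
    out[3] = 0.488603f * x;
    out[4] = 1.092548f * x * y;                 // quadratic band
    out[5] = 1.092548f * y * z;
    out[6] = 0.315392f * (3.0f * z * z - 1.0f);
    out[7] = 1.092548f * x * z;
    out[8] = 0.546274f * (x * x - y * y);
}

// Sampling a stored SH vector is then just a dot product:
float SampleSH(const float coefs[9], float x, float y, float z)
{
    float basis[9];
    EvalQuadraticSH(x, y, z, basis);
    float result = 0.0f;
    for (int i = 0; i < 9; i++)
        result += coefs[i] * basis[i];
    return result;
}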

OneEightHundred fucked around with this message at 16:34 on Jan 20, 2010

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ColdPie posted:

Direct3D has a backwards Z-axis, and Bungie does all of their work on MS systems, so they probably do all of their work with a backwards Z-axis.
There isn't really any reason to do SH calculations in a different coordinate system from the rest of the world though.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

midnite posted:

Question: What do you use for generating the UVs for your lightmaps? Did you write something yourself, or...?
The maps are still based on the Q3 format, so the UVs are generated automatically per surface. I modified it to support recharting and to allocate space more efficiently, but otherwise it's mostly unchanged.

quote:

Also, have you seen this website:
http://www.paulsprojects.net/index.html
Yeah, that's based primarily on the Sony paper. I hit up practically every site on the topic while figuring this stuff out; the ONLY place I've found the matrix rotation code is the Bungie slides, which they incidentally DIDN'T include in the version on their main publications page.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Turns out encoding horizon levels using a radial Fourier transform lets you do self-shadowing textures extremely cheaply:


[Screenshot: 789x530]


Well, if 5 bytes of storage per texel counts as cheap. I guess you don't have to use it for everything, so whatever. I'm sure someone else has invented this already. The banding was due to my laptop's shitty video card. Fixed the banding! :toot:

edit: Posted article describing it:
http://codedeposit.blogspot.com/2010/02/self-texturing-shadows-using-radial.html
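In simplified form, the idea is: each texel stores a low-order Fourier series approximating the horizon elevation as a function of tangent-space azimuth, and shading compares the light's elevation against the reconstructed horizon. A toy sketch (my assumption here is that the five coefficients are where the 5 bytes per texel come from; see the article for the real formulation):

code:

#include <cmath>

struct HorizonTexel
{
    // Fourier coefficients of horizon elevation vs. tangent-space azimuth;
    // each is quantized to one byte in the actual texture (5 bytes total).
    float a0, a1, b1, a2, b2;
};

// lx, ly, lz: normalized tangent-space light direction (z = surface normal).
// Returns 1 if the light clears the stored horizon, 0 if it's occluded.
// A real shader would smooth the comparison instead of hard-stepping it.
float HorizonShadow(const HorizonTexel &t, float lx, float ly, float lz)
{
    float azimuth = std::atan2(ly, lx);
    float elevation = std::asin(lz);

    float horizon = t.a0
                  + t.a1 * std::cos(azimuth) + t.b1 * std::sin(azimuth)
                  + t.a2 * std::cos(2.0f * azimuth) + t.b2 * std::sin(2.0f * azimuth);

    return (elevation >= horizon) ? 1.0f : 0.0f;
}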

OneEightHundred fucked around with this message at 06:52 on Feb 11, 2010

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
The horizon-level thing for self-shadowing was actually originally intended for models, to overcome some limitations of PRT (i.e. PRT sucks with normalmapping) and to hopefully provide a cheap way of emulating self-shadowing without being forced to use high-resolution shadowmaps. Fortunately, it works okay for that too:



Done the same way, except per-vertex. It's actually reasonably fast to calculate, since you can determine the horizon level of a triangle by intersecting each edge with the plane spanned by the vertex normal and a radial direction.

(Grayscale because my model viewer doesn't really have a way to toggle scene parameters at the moment).

OneEightHundred fucked around with this message at 05:42 on Feb 13, 2010

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

bitreaper posted:

Spent a few months working on this, going back to it once I finish all the arts courses they want me to take to round out my degree...
This is pretty awesome. Have you considered trying to make it stand-alone for game integration, if it's capable of running in real time?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Scaevolus posted:

In the video it seemed to vary from .5 to 20 fps.
The parameters varied wildly, though. I guess the question is whether it could be scaled down enough to run in a game environment without losing too much of the effect.

Particle systems that simulate turbulence are still fairly elusive at the moment; consider that fire, flames, and smoke are some of the main things that stick out like a sore thumb as cheesy and unrealistic even in otherwise extremely well-produced games.

OneEightHundred fucked around with this message at 15:44 on Apr 5, 2010

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Pfhreak posted:

I've always enjoyed watching sorting algorithms at work.
The old Mac QuickBasic sorting sample played sound too, with the pitch corresponding to the value of the elements being sorted.

Heap sort was pretty trippy.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

antpocas posted:

...Vuvuzela Hero?
I think the best way to do Vuvuzela Hero would be trying to stop 6 Vuvuzelas from playing instead of making them play.
