|
I decided the best way to make a Battlefield-ish game was to start with the Quake 3 engine and make a few minor modifications: vertex/pixel shader support, new material system, radiosity (which uses GPU acceleration to compile), directional lightmaps, terrain, skeletal models, load-time asset compilation, blah blah... Oh yeah, and I suck at recruiting, so I'm stuck in tech demo hell. OneEightHundred fucked around with this message at 03:55 on May 19, 2008
# ¿ May 19, 2008 03:05 |
|
|
ashgromnies posted:
How do you learn to do stuff like vertex shaders? I assume there's an "accepted" way of doing it. I'm really curious about all this neat graphics programming after spending years in Linux command-line hell.

If anything I'd say the easiest way is to "just use them." They're not that hard to pick up as long as you understand what they do, understand the math behind the operations you already do, and understand how the rendering pipeline works. OpenGL in particular makes it dirt easy. It's not much harder in D3D, except you have to calculate the projection matrix yourself and they'll try tempting you to do everything with their lovely FX framework.

There are a metric shitload of samples out there. If anything I'd recommend downloading the Cg toolkit, because it has a ton of useful samples (Cg is almost identical to HLSL and 90% the same as GLSL). More of the difficulty comes from understanding how processes like normalmapping and those stupid water shaders everyone has nerdgasms over work than from understanding how to use them in place of existing systems.

I haven't read GPU Gems, but I've seen a ton of good code snippets with it cited as the source, so if there's one book to pick up on the subject, that's probably going to be it.

If you want to get out of command-line hell, then the best place to start would be to pick up GLUT or SDL. GLUT is easier to get into, SDL scales better.

OneEightHundred fucked around with this message at 19:15 on May 19, 2008
# ¿ May 19, 2008 19:03 |
|
Wuen Ping posted:Yes, we are using Ogre. I was very resistant to using it at first, because when I first looked at it, it did stupid things like reinvent a substantial portion of the STL.
|
# ¿ Jun 8, 2008 23:32 |
|
tripwire posted:
Thanks, good stuff to know.

Nuke Mexico posted:
NIH syndrome at its worst
|
# ¿ Jun 10, 2008 10:41 |
|
Nuke Mexico posted:To be fair though, every case of "Not-Invented-Here" Syndrome I've ever dealt with in the past by rationalization of "flaws" in the 3rd party system
|
# ¿ Jun 11, 2008 05:28 |
|
Null Pointer posted:
I've been working on a level editor for about a week.

quote:
It is kind of a mod, since it overwhelmingly uses graphics assets from Fallout:Tactics, but it has nothing to do with Tactics

OneEightHundred fucked around with this message at 09:29 on Jul 14, 2008
# ¿ Jul 14, 2008 09:26 |
|
I was sick of the ugly console interface for my map compiler so I made a front-end for it. And holy poo poo do I hate .NET now. (No, Microsoft, I do not want to make a delegate and a relay function and a method in the form just to change controls from other threads, nor do I want to write trivial conversion functions for converting char/wchar because ConvertAll doesn't have a default for trivial conversions, gently caress you) Click here for the full 1680x1050 image. OneEightHundred fucked around with this message at 04:49 on Aug 24, 2008 |
# ¿ Aug 24, 2008 00:16 |
|
There we go. Color-corrected lighting and radiosity, much more accurate than doing it in gamma space: Click here for the full 1031x800 image.
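The gamma-vs-linear difference is easy to demonstrate with a toy sketch. This uses the common 2.2 power curve as a stand-in for the real sRGB transfer function (which is piecewise), and made-up light values; it's not my actual pipeline code:

```python
def to_linear(c):
    # decode a stored (gamma-space) value to linear light, 2.2 approximation
    return c ** 2.2

def to_gamma(c):
    # re-encode linear light for display
    return c ** (1.0 / 2.2)

# Two lights, each contributing a stored gamma-space intensity of 0.5.
a, b = 0.5, 0.5

# Naive: summing in gamma space blows out to full white.
gamma_sum = min(a + b, 1.0)

# Correct: decode, sum in linear space, re-encode. Comes out noticeably dimmer.
linear_sum = to_gamma(min(to_linear(a) + to_linear(b), 1.0))
```

The point being that light adds linearly in the real world, so accumulating radiosity bounces in gamma space systematically over-brightens everything.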
|
# ¿ Aug 24, 2008 09:17 |
|
shodanjr_gr posted:
Is that real time radiosity?

MasterSlowPoke posted:
Quake 3's kind of popular here.

The Quake 3 renderer got gutted a while ago. Right now it's using my own rendering middleware thing (which contains no Quake 3 code or dependencies; the model viewer can load and render maps just fine, for example), and the only rendering code left in Quake 3 is some basic setup stuff and some glue to the new renderer.

OneEightHundred fucked around with this message at 19:28 on Aug 24, 2008
# ¿ Aug 24, 2008 19:03 |
|
shodanjr_gr posted:
Bummer

The best solutions at the moment for good ambient lighting are to either bake the ambient term and do the direct lighting in real-time, or use SSAO. SSAO works so well right now that just about all of the real-time ambient lighting effort is being focused on it.
|
# ¿ Aug 24, 2008 20:38 |
|
Sagacity posted:What about this?
|
# ¿ Aug 25, 2008 19:17 |
|
Building on my last post: soft shadows. Click here for the full 1680x1050 image.
|
# ¿ Sep 7, 2008 07:46 |
|
HicRic posted:
That looks lovely - what sort of techniques are you using to make it?

I can get away with it since direct lighting uses beamtree casting (which is really fast). For terrain I'll be using a blur kernel based on depth variance, since it's much slower and can actually record depth information properly, which the beamtree caster can't really do without ridiculous memory consumption.
|
# ¿ Sep 7, 2008 19:10 |
|
Back on this project again, playing around with global illumination stuff some more. Most implementations use photon mapping, patch-based radiosity with lightmaps, or real-time SSAO. I made my own method, which works at a much higher resolution, technically at the expense of distribution accuracy. Fortunately, since last time, I worked past that problem by normalizing the scene lighting every pass. Results improved considerably.

It's also twice as fast because I converted the main bottleneck to SSE; it now processes something like 400-600 samples per second on my Athlon 64 4000+.

Test map, 3 lights: the sun, plus two in the "building". Click here for the full 1280x800 image.

(The black spot artifacts where solids intersect with the terrain have been fixed since that screenshot was taken)

OneEightHundred fucked around with this message at 21:32 on May 9, 2009
# ¿ May 9, 2009 21:18 |
|
I dunno, this is one of those times I wish I had more art to work with so I could put out something more impressive. I also wish I wasn't too burned out to continue this work earlier, but hey, lost my job.

It's actually not true radiosity, and I've consequently stopped calling it that, but a convincing emulation of it:

- Small wide-FOV scene renders are snapped from every sample point
- Pixels from the scene render are used to determine ambient contribution based on direction and manifold area. This is currently the major bottleneck, and converting it to SSE helped a lot.
- Contribution is combined with a recast of direct light influences
- All light is rescaled to produce the same total scene brightness as just the direct light contribution.
- Repeat

All passes but the final one are done at low resolution to reduce computation time. The main difference between this and true radiosity is that radiosity normalizes per sample. My theory is that luminescence is uniform enough in real-world scenarios that global normalization will work fine.
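The global normalization step can be sketched in a few lines. This is a toy illustration with invented sample values and an invented `gathered` term standing in for the real ambient capture, not the actual compiler code:

```python
# Toy sketch: after adding a bounce of indirect light, rescale every sample
# so the total scene brightness matches the direct-light-only total.

def total(samples):
    return sum(samples)

direct = [1.0, 0.5, 0.0, 0.25]   # direct lighting per sample point (invented)
target = total(direct)           # brightness budget to preserve

def add_bounce(samples, gathered):
    # combine direct light with a gathered indirect term...
    combined = [s + g for s, g in zip(samples, gathered)]
    # ...then normalize globally (not per sample, unlike true radiosity)
    scale = target / total(combined)
    return [c * scale for c in combined]

lit = add_bounce(direct, gathered=[0.1, 0.2, 0.3, 0.1])
```

Normalizing once per pass is what keeps repeated bounces from blowing the scene brightness up, at the cost of per-sample accuracy.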
|
# ¿ May 9, 2009 22:42 |
|
shodanjr_gr posted:Actually, I had the exact same idea for an algorithm in order to speedup offline AO calculations! Hard part of this is going to be converting it into an actual presentable portfolio entry. I'm a programmer, I'm no good at this "art" poo poo.
|
# ¿ May 10, 2009 08:39 |
|
shodanjr_gr posted:Why is this that much of a problem? When you can have this:
|
# ¿ May 10, 2009 12:59 |
|
heeen posted:Can you give some detail on why your method is better/faster/more applicable than, say, q3map2 -light -bounce X etc? Of course, q3map2 is further impaired by its light casting algorithm being slow AND scaling very poorly.
|
# ¿ May 10, 2009 16:01 |
|
heeen posted:
I see. How fast in actual numbers on, say, the three lights from your screenshot, is your algorithm? I take it 300 samples per second means not realtime?

It uses a 6"x6" lightmap resolution.

slovach posted:
What Q3 code base are you using as a base?

quote:
How much of it is left? I remember Carmack somewhere saying about how he wanted to modernize the renderer a bit by adding support for VBO's, etc for Quake Live, but he didn't in the end because of time (I think that was the reason).

All that's left of the original renderer is the WGL code, and 1500 lines of glue to bind the Quake 3 refresh API to the new renderer.
|
# ¿ May 11, 2009 00:00 |
|
After finally deciphering spherical harmonics to the point where I discovered order-1 spherical harmonics are ridiculously easy to implement, I've started converting lightmaps to use them instead of my existing approach. They do a much better job of handling lighting influences from multiple directions, they work better with gloss, and they'll probably work better with texture compression too. Click here for the full 1280x800 image.

I'm actually kind of annoyed that I didn't use these sooner. Ironically, it turned out that order-1 SH is identical to an approach that I was going to try anyway, but didn't think would work.

OneEightHundred fucked around with this message at 03:25 on May 17, 2009
# ¿ May 17, 2009 02:38 |
|
Hubis posted:
I'm a little confused -- what do you mean when you say you're using "spherical harmonics" for light-maps? Are you just unrolling the Theta and Phi coordinates into U and V?

nDir = normalize(dir);
sampledValue = sh0 + nDir.x*sh1 + nDir.y*sh2 + nDir.z*sh3;

I think it's possible to solve the quadratic band in Cartesian space too; the problem is that almost all of the material I can find on SH uses polar coordinates. I'd try figuring out how to resolve it into Cartesian space, but 4 lightmaps per surface is already pushing it, so gently caress it.

OneEightHundred fucked around with this message at 00:32 on May 19, 2009
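To make the "order-1 is ridiculously easy" point concrete, here's a toy Python sketch of that evaluation plus one way to project a directional light into it. The half-constant/half-directional projection convention here is just one simple choice that reconstructs intensity I along the light direction and 0 opposite it -- not necessarily the constants the lightmap compiler actually uses:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def project_directional(light_dir, intensity):
    # Split the energy between the constant term and the linear (x, y, z) terms.
    d = normalize(light_dir)
    sh0 = 0.5 * intensity
    sh1, sh2, sh3 = (0.5 * intensity * c for c in d)
    return (sh0, sh1, sh2, sh3)

def sample(sh, direction):
    # Same shape as the shader snippet: constant term plus a dot product.
    n = normalize(direction)
    return sh[0] + n[0] * sh[1] + n[1] * sh[2] + n[2] * sh[3]

sh = project_directional((0.0, 0.0, 1.0), 1.0)
```

Sampling along the light direction gives back the full intensity, the opposite direction gives zero, and everything in between falls off linearly with the cosine.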
# ¿ May 19, 2009 00:24 |
|
Hubis posted:
Oh, cool -- I hadn't even realized you could linearize spherical harmonics like that. What's the benefit of using spherical harmonics for that? I've seen Wavelet compression used for textures to fantastic effect, usually preserving a lot more detail with fewer terms.

I jotted up a mini-article on the various lighting methods I've looked at for this, and the ones I settled on: http://oneeightzerozero.blogspot.com/2009/05/lighting-is-everything.html

I wasn't even aware people had tried wavelet compression for lightmaps. How's that even work?

OneEightHundred fucked around with this message at 07:11 on May 21, 2009
# ¿ May 21, 2009 06:07 |
|
I swear the more I get done, the more it seems like I have left to do. Oh well, at least world geometry shadows models now ... Click here for the full 727x560 image. ... and I got the Mandatory Gimmicky Water Shader out of the way ... Click here for the full 755x635 image. (Yes, I know there's a line near the model's hip in the shadows image, the material system doesn't handle clamp-to-edge yet)
|
# ¿ Jul 13, 2009 23:45 |
|
shodanjr_gr posted:Shadow volumes are so 2004 :P.
|
# ¿ Jul 14, 2009 20:03 |
|
Actually I think I just nailed down a technique that does a decent soft shadow approximation; I'll post screenshots once my desktop has Internet access again.

Otto Skorzeny posted:
Are you planning on adding antialiasing at any time? It looks real neat but the jaggies are a bit irksome
|
# ¿ Jul 14, 2009 21:24 |
|
There, hacked up a shadow-softening technique. Click here for the full 833x773 image. OneEightHundred fucked around with this message at 08:18 on Jul 15, 2009 |
# ¿ Jul 15, 2009 02:13 |
|
akadajet posted:I think that errors sometimes look more interesting than the expected results.
|
# ¿ Aug 31, 2009 14:32 |
|
Fixed a bunch of retarded bugs in my global illumination toy. The result was an extreme improvement in lighting quality. I think I got them all out, because it actually looks really good now.

12" sample size + 1 bounce (a.k.a. lowest quality): Click here for the full 640x400 image.

Diffuse light only: Click here for the full 640x400 image.

(Yes I know there's a bright spot on one of the pillars. I know what causes it, I'm still working on fixing it.)

In other news, one of the test maps won't even run without my computer BSODing due to an infinite loop in ATI's lovely video drivers.

OneEightHundred fucked around with this message at 10:52 on Sep 28, 2009
# ¿ Sep 28, 2009 10:34 |
|
Changed the reflectance model to Cook-Torrance. It's extremely hard to illustrate properly with a screenshot rather than a video, but gloss behaves much more convincingly now.

Tweaked volumetric fog, which now properly handles progressive density gradients at the fog surface rather than transitioning immediately from 0% to 100% density. Click here for the full 640x400 image.

Demo of transitional volumetric fog on a test map with 8x density modifier; the "wedge" tip is where the top of the fog volume starts. Click here for the full 640x400 image.

Unfortunately I'm not going to be implementing NS2-style shadow casting within fog volumes, because I'm trying to spend LESS time on renderer features and move on.
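For reference, the Cook-Torrance specular term is the D*F*G product over 4(N.L)(N.V). This sketch uses GGX for the distribution and the Schlick approximations for Fresnel and geometry as stand-ins -- the post doesn't say which D/F/G terms the renderer actually uses:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(c / l for c in v)

def cook_torrance_specular(n, l, v, roughness, f0):
    nl, nv = max(dot(n, l), 0.0), max(dot(n, v), 0.0)
    if nl <= 0.0 or nv <= 0.0:
        return 0.0                       # light or viewer below the surface
    h = normalize(add(l, v))             # half vector
    nh, vh = max(dot(n, h), 0.0), max(dot(v, h), 0.0)
    a2 = roughness ** 4                  # GGX: alpha = roughness^2, a2 = alpha^2
    d = a2 / (math.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)   # GGX distribution
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                    # Schlick Fresnel
    k = (roughness + 1.0) ** 2 / 8.0                         # Schlick-GGX geometry
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))
    return d * f * g / (4.0 * nl * nv)
```

Compared to plain Blinn-Phong, the distribution and geometry terms are what make gloss tighten and spread believably as the viewing angle changes, which is presumably the behavior that's hard to show in a still.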
|
# ¿ Nov 10, 2009 01:21 |
|
Yeah, there's a water surface about 4" above the "top" floor portion. And yeah I know the color looks like vaporized poo poo, it was originally a paler "muddy water" type color except then I added gamma correction which turned it into a more orange color and I haven't fixed it yet. OneEightHundred fucked around with this message at 23:32 on Nov 10, 2009 |
# ¿ Nov 10, 2009 20:15 |
|
I've been tinkering with non-photorealistic rendering a bit to see if it's worthwhile, with somewhat interesting results. I need to delve a bit deeper into the theory behind this stuff if I'm really going to take it all the way, but one thing that's fairly common in color comic book art is a combination of stark delineations between light and shadow and "airbrushed" subtle tone changes within the light and dark areas. I think this can be approximated in the lighting model: Click here for the full 640x400 image.

Work in progress, and I don't think this is the greatest example, since it would probably work a lot better with pastel textures than busy photorealistic ones.

OneEightHundred fucked around with this message at 13:31 on Nov 25, 2009
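The "stark cut plus airbrushed gradient" idea can be approximated with a banded N.L response. This is a toy sketch with invented thresholds and gradient strength, not the actual shader:

```python
def toon_shade(n_dot_l, cut=0.3, soft=0.15):
    # Hard two-band split between lit and shadowed (the stark delineation)...
    base = 1.0 if n_dot_l >= cut else 0.35
    # ...plus a gentle tone ramp within each band (the "airbrushed" part),
    # scaled off the raw N.L so it stays subtle.
    return base + soft * (n_dot_l - cut)
```

The discontinuity at the cut gives the comic-book edge; the small slope inside each band keeps surfaces from looking completely flat.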
# ¿ Nov 25, 2009 12:51 |
|
Dubious update: finally loving got quadratic SH working. Radiosity now also uses a three-sample capture to get a full hemispherical degree manifold, meaning it handles contributions from shallow angles properly and overall looks exactly like it should. I've been considering going to photon mapping instead, but this is so fast I really don't care. Click here for the full 1280x800 image.

This was a terribly uninspiring chore for a number of reasons:
- Rotating an SH vector by a matrix is "easy", yet surprisingly difficult to find the formulae for. I finally dug it out of a thoroughly buried Bungie presentation, and have yet to find the formulae elsewhere.
- The Bungie version is apparently flipped on the Z axis for some reason.
- One of the formulae in the Sony paper is wrong.
- Most existing publications and code samples have numerous "mystery values", and still factor things by pi for some incomprehensible reason.

This was so annoying to get working I decided to post the "spoilers" and spare anyone else the frustration:
- Code to calculate the coefs for Lambertian reflectance based on a directional light.
- Code to rotate an SH vector using a 3x3 matrix.
- Code to sample the SH vector using Cartesian coordinates.
- No "mystery values"
- Eliminated sqrt(3) from coef 2,0 by premultiplying the relevant coefficient and dividing during sampling, so all constants are now rational numbers.

As for why you'd want to use quadratic... because it's very accurate, of course!

OneEightHundred fucked around with this message at 16:34 on Jan 20, 2010
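For anyone chasing the same thing: the 9-term quadratic SH basis evaluates directly in Cartesian coordinates with no polar conversion. This sketch uses the textbook real-SH normalization constants (including the irrational factors the post premultiplies away), so it's the conventional form, not the post's rescaled one:

```python
def sh_basis(x, y, z):
    # Real spherical harmonic basis, bands 0-2, for a unit direction (x, y, z).
    return [
        0.282095,                      # Y(0, 0)
        0.488603 * y,                  # Y(1,-1)
        0.488603 * z,                  # Y(1, 0)
        0.488603 * x,                  # Y(1, 1)
        1.092548 * x * y,              # Y(2,-2)
        1.092548 * y * z,              # Y(2,-1)
        0.315392 * (3 * z * z - 1),    # Y(2, 0)
        1.092548 * x * z,              # Y(2, 1)
        0.546274 * (x * x - y * y),    # Y(2, 2)
    ]

def sample(coefs, x, y, z):
    # Reconstruct the stored function in direction (x, y, z): a dot product
    # of the 9 coefficients against the 9 basis values.
    return sum(c * b for c, b in zip(coefs, sh_basis(x, y, z)))
```

Premultiplying sqrt(3) out of the Y(2,0) constant, as the post describes, just moves a factor from the stored coefficient into the sampling code; the reconstruction is unchanged.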
# ¿ Jan 20, 2010 15:41 |
|
ColdPie posted:Direct3D has a backwards Z-axis, and Bungie does all of their work on MS systems, so they probably do all of their work with a backwards Z-axis.
|
# ¿ Jan 21, 2010 08:03 |
|
midnite posted:Question: What do you use for generating the UVs for your lightmaps? Did you write something yourself? or... quote:Also, have you seen this website:
|
# ¿ Jan 23, 2010 19:34 |
|
Turns out encoding horizon levels using a radial Fourier transform lets you do self-shadowing textures extremely cheaply: Click here for the full 789x530 image.

Well, if 5 bytes of storage per texel is cheap. But I guess you don't have to use it for everything, so whatever. I'm sure someone else has invented this already.

edit: Posted an article describing it: http://codedeposit.blogspot.com/2010/02/self-texturing-shadows-using-radial.html

OneEightHundred fucked around with this message at 06:52 on Feb 11, 2010
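The idea can be sketched roughly like this: store the horizon elevation around a texel as a few radial Fourier terms (five coefficients here, which lines up with the 5 bytes per texel if each is quantized to a byte), then self-shadow by comparing the light's elevation against the reconstructed horizon. The fitting below is a plain DFT over sampled azimuths; the post's exact encoding isn't specified:

```python
import math

def fit_horizon(samples):
    # samples: horizon elevation (radians) at n evenly spaced azimuths
    n = len(samples)
    a0 = sum(samples) / n
    coefs = [a0]
    for k in (1, 2):
        ak = 2.0 / n * sum(s * math.cos(k * 2 * math.pi * i / n)
                           for i, s in enumerate(samples))
        bk = 2.0 / n * sum(s * math.sin(k * 2 * math.pi * i / n)
                           for i, s in enumerate(samples))
        coefs += [ak, bk]
    return coefs  # [a0, a1, b1, a2, b2]

def horizon(coefs, azimuth):
    # Reconstruct the horizon elevation in a given azimuthal direction.
    a0, a1, b1, a2, b2 = coefs
    return (a0 + a1 * math.cos(azimuth) + b1 * math.sin(azimuth)
               + a2 * math.cos(2 * azimuth) + b2 * math.sin(2 * azimuth))

def lit(coefs, light_azimuth, light_elevation):
    # The texel is lit if the light sits above the local horizon.
    return light_elevation > horizon(coefs, light_azimuth)
```

At shading time the whole test is two sin/cos pairs and a comparison, which is why it's so cheap relative to a shadowmap lookup.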
# ¿ Feb 11, 2010 04:52 |
|
The horizon level thing for self-shadowing was actually originally intended for models, to overcome some limitations of PRT (i.e. PRT sucks with normalmapping) and hopefully develop a cheap way of emulating self-shadowing without being forced to use high-resolution shadowmaps. Fortunately, it works okay for that too: done the same way, except per-vertex.

It's actually reasonably fast to calculate, since you can determine the horizon level of a triangle by intersecting each edge with a plane coplanar with the vertex normal and a radial direction. (Grayscale because my model viewer doesn't really have a way to toggle scene parameters at the moment.)

OneEightHundred fucked around with this message at 05:42 on Feb 13, 2010
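As a rough illustration of the per-vertex variant: estimate the horizon at a vertex as the highest elevation (angle above the vertex's tangent plane) reached by nearby geometry. This toy version measures the elevation of neighboring points, which is a crude point-based stand-in for the edge/plane intersection the post describes:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def length(v): return math.sqrt(dot(v, v))

def elevation(p, normal, q):
    # Angle of the direction p->q above the tangent plane at p
    # (plane perpendicular to the vertex normal).
    d = sub(q, p)
    return math.asin(max(-1.0, min(1.0, dot(d, normal) / length(d))))

def horizon_level(p, normal, neighbors):
    # The horizon is set by whichever neighbor rises highest above the plane.
    return max(elevation(p, normal, q) for q in neighbors)
```

The real version intersecting triangle edges with the radial plane gives a continuous answer per azimuth instead of one per neighbor point, but the elevation comparison is the same.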
# ¿ Feb 13, 2010 05:17 |
|
bitreaper posted:Spent a few months working on this, going back to it once I finish all the arts courses they want me to take to round out my degree...
|
# ¿ Apr 5, 2010 13:58 |
|
Scaevolus posted:
In the video it seemed to vary from .5 to 20 fps.

Particle systems that simulate turbulence are still a fairly elusive thing at the moment; consider that fire, flame, and smoke are some of the main things that stick out like a sore thumb as looking cheesy and unrealistic, even in otherwise extremely well-produced games.

OneEightHundred fucked around with this message at 15:44 on Apr 5, 2010
# ¿ Apr 5, 2010 15:25 |
|
Pfhreak posted:I've always enjoyed watching sorting algorithms at work. Heap sort was pretty trippy.
|
# ¿ May 14, 2010 05:46 |
|
|
antpocas posted:...Vuvuzela Hero?
|
# ¿ Jul 2, 2010 23:27 |