|
I'm building a raytracer! :iamafag: Click here for the full 1500x1200 image. It's from scratch, so the file is self-contained. No OpenGL or anything like that. This is one of the prettier programs I've written.
|
# ? Apr 25, 2009 23:46 |
|
ih8ualot posted:I'm building a raytracer! :iamafag:
|
# ? Apr 26, 2009 00:46 |
|
I've been working on a Facebook Chat application for Android in my spare time. You can find it on the Android Market under the name FBabble. It already has 8,500 active users, and I can say this is the first time I've made something that's used by so many people!
|
# ? Apr 26, 2009 02:46 |
|
ih8ualot posted:I'm building a raytracer! :iamafag: Why are your balls warped near the bottom?
|
# ? Apr 26, 2009 10:00 |
|
tripwire posted:What language are you using? C++, but I'm not using any fancy constructs or anything, so I imagine it could just as easily have been written in C. floWenoL posted:Why are your balls warped near the bottom? Because my camera has a flat lens. It gives those kinds of weird results, but it's a helluva lot easier to program. Plus, my professor doesn't care what kind of lens I use.
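For anyone following along, here's roughly what the pinhole alternative (brought up a couple posts down) looks like in ray-generation form. This is a hedged sketch with illustrative names, not anything from ih8ualot's actual code: every primary ray passes through a single eye point, so spheres stay round across the whole frame instead of warping near the edges.

```cpp
#include <cassert>
#include <cmath>

// Minimal pinhole-camera ray generation sketch (illustrative names only).
struct Vec3 {
    double x, y, z;
};

static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Map pixel (px, py) on a width*height image to a ray direction through a
// virtual image plane one unit in front of the eye, given a vertical field
// of view in radians. The camera looks down -z.
Vec3 pinholeRayDir(int px, int py, int width, int height, double fovY) {
    double aspect = double(width) / double(height);
    double halfH = std::tan(fovY * 0.5);
    double halfW = halfH * aspect;
    // Normalized device coordinates in [-1, 1], y flipped so +y is up.
    double u = (2.0 * (px + 0.5) / width - 1.0) * halfW;
    double v = (1.0 - 2.0 * (py + 0.5) / height) * halfH;
    return normalize({u, v, -1.0});
}
```

The center pixel maps to (almost exactly) straight ahead, and distortion at the frame edges is just perspective, not the lens-plane warping described above.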
|
# ? Apr 26, 2009 19:06 |
|
floWenoL posted:Why are your balls warped near the bottom? Hehehe, you said "balls".
|
# ? Apr 26, 2009 19:19 |
|
ih8ualot posted:
pfft, wuss V V V (Image Synthesis was my favorite class in my 4th year) Hubis fucked around with this message at 23:14 on Apr 26, 2009 |
# ? Apr 26, 2009 20:59 |
|
ih8ualot posted:Because my camera has a flat lens. It gives those kinds of weird results, but it's a helluva lot easier to program. Wouldn't the simplest thing to program be a simple pinhole camera model?
|
# ? Apr 27, 2009 02:16 |
|
Hubis posted:pfft, wuss I love the title.
|
# ? Apr 27, 2009 14:06 |
|
ih8ualot posted:I love the title. 48 hours of sleep deprivation debugging refractor-refractor interfaces is a hell of a thing.
|
# ? Apr 28, 2009 00:46 |
|
Sickr.org is an Open Data illness-tracking system.
|
# ? Apr 29, 2009 03:51 |
|
http://nib.bz/ URL shortening and tracking.
|
# ? Apr 29, 2009 05:21 |
|
I've been working on a small IRC bot for an hour or so every night for the last week. The interesting thing is that it's in AS3/Flex. It's an odd choice of language, but I get some free stuff when it comes to doing images, such as my comic generator, which generates comics from chat based on what the bot deems to be 'funny'.
|
# ? Apr 30, 2009 19:32 |
|
^^^ I love this. I'm just about done with an NLP project for finding related concepts using Wikipedia. (thumbs) Notice that it fails pretty badly on the second input. BTW, does anyone know if Interface Builder can hook into Python easily?
|
# ? May 1, 2009 00:39 |
|
PrObLeM posted:http://nib.bz/ URL shortening and tracking. Part of me is asking why another URL shortening/tracking service. The rest of me is shutting that part up because I think nib.bz is a cooler name than the others and I'd use it.
|
# ? May 1, 2009 01:24 |
|
Pfft. Who needs graphics anyway.
|
# ? May 6, 2009 03:41 |
|
The Evan posted:^^^ I love this PyObjC is actually pretty nice so far. You should be able to adapt some of the concepts from http://lethain.com/entry/2008/aug/22/an-epic-introduction-to-pyobjc-and-cocoa/ to link a text view to the output of your app or something (depends on what you wanna do).
|
# ? May 6, 2009 14:09 |
|
Putting a bunch of stuff together for a Generations update. Warning: 1+MB PNGs ahead (in links). Also working on a 9-minute HD video, so that'll be up soon(ish - it's going to take an hour to render and 3-4 to upload). Quick rundown of new features for the next release, some still not yet implemented:

- Variety of color coding options: Classic (like 0.16 and prior), Aged (shown above), Monotone, and a couple others. Eventually I want to turn it into an equation-based system so you can 'program' your own with various parameters (x,y,z cell location, age, born/died cells, etc.).
- More useful speed control. Right now this just lets you pick what speed to run at (or not run; you can stop the simulation entirely now). I'm not sure if this'll make it into the update, but I'm eventually going to add Reverse and Hyper modes - Reverse will let you 'rewind' the simulation, Hyper will do more than one simulation update per display frame. Both require engine changes that, while not tremendous, are substantial and might get put off in favor of a timely release.
- "Save States" (think emulators), being able to clear the field or portions of the field, single cell add/delete, and changing rule sets mid-simulation. That last one may not get in this release as it'll tie in a bit with the Rewind feature (rewinding the simulation and then resuming can (by option) overwrite "future" layers with newly simulated ones).
- Better camera controls.
- More interface components; all options will have clickable controls. There will also be a menu with even more options, like more permanent saves.

At some point I might port the whole thing over to C#/DirectX from FreeBASIC/OpenGL, but that's a whole project in itself (I need to learn C#, first of all).
|
# ? May 7, 2009 08:28 |
|
Roflex posted:Putting a bunch of stuff together for a Generations update. Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive.
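The fogging idea is essentially a per-cell blend toward a fog color that grows with depth below the top layer. A minimal sketch, assuming scalar depth and a simple exponential falloff (all names and constants here are my own illustration, not anything from the Generations source):

```cpp
#include <algorithm>
#include <cmath>

// Sketch of height-based 'fogging' as a depth cue: cells further below the
// topmost layer get blended toward a fog color. Illustrative names only.
struct Color {
    float r, g, b;
};

// depthBelowTop: how far (in layers) this cell sits under the top layer.
// density: how quickly fog accumulates; keep it small for a subtle effect.
Color applyHeightFog(Color cell, Color fog, float depthBelowTop, float density) {
    // Exponential falloff clamped to [0, 1]; depth 0 means no fog at all.
    float f = 1.0f - std::exp(-density * std::max(depthBelowTop, 0.0f));
    f = std::min(f, 1.0f);
    return {cell.r + (fog.r - cell.r) * f,
            cell.g + (fog.g - cell.g) * f,
            cell.b + (fog.b - cell.b) * f};
}
```

With a small density (say 0.05-0.1 per layer) the top layer is untouched and lower layers fade gently, which is the depth cue being suggested.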
|
# ? May 7, 2009 15:19 |
|
Hubis posted:Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive. I was going to recommend this as well.
|
# ? May 7, 2009 16:09 |
|
Here's a link to a small vid of my undergraduate dissertation (as promised about a month ago). Realistic underwater caustics and godrays (Rapidshare) http://www.youtube.com/watch?v=khVuTTX8iy4 Sorry for the rapidshare, but I don't have access to our group's hosting at the moment in order to upload it (the current version of the project site is a bit out of date). At some point in the future (after/if this gets published) I'll also post a link to the source + paper for those who care :P. edit: Doh... totally forgot about YouTube. Added a link; it should be up and running in a few minutes. edit2: If the SD-quality vid is not working (it worked fine for some reason up until the HD vid became available), then watch the HD version. shodanjr_gr fucked around with this message at 17:31 on May 7, 2009 |
# ? May 7, 2009 16:30 |
|
shodanjr_gr posted:Here's a link to a small vid of my undergraduate dissertation (as promised about a month ago).
|
# ? May 7, 2009 16:57 |
|
shodanjr_gr posted:Here's a link to a small vid of my undergraduate dissertation (as promised about a month ago). holy crap
|
# ? May 7, 2009 17:55 |
|
shodanjr_gr posted:http://www.youtube.com/watch?v=khVuTTX8iy4 From the outside the water looks REALLY impressive. You might want to do some tweaks though for the "in water" representation, specifically, light angle should be a bit off and light colour should include some red and yellow to make it more sun-like.
|
# ? May 7, 2009 18:02 |
|
Hubis posted:Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive. Something like in the original version? Also, video's up, I'll annotate it later: http://www.youtube.com/watch?v=T26nQX1Pc6g&fmt=22
|
# ? May 7, 2009 18:02 |
|
Mithaldu posted:From the outside the water looks REALLY impressive. You might want to do some tweaks though for the "in water" representation, specifically, light angle should be a bit off and light colour should include some red and yellow to make it more sun-like. At the moment it is simulating a "mid-noon" sun, so the light source is essentially directional and staring straight down. It's quite easy to change that around in the implementation (or switch the whole thing to a point light), but I just wanted to stick to the stuff mentioned in my paper (couldn't fit everything in). You are right with regard to the sun color, though. I'll try to fix that once I do my revision. Keep the comments coming. quote:Something like in the original version? shodanjr_gr fucked around with this message at 18:15 on May 7, 2009 |
# ? May 7, 2009 18:12 |
|
shodanjr_gr posted:At the moment it is simulating a "mid-noon" sun, so the light source is essentially directional and staring straight down. It's quite easy to change that around in the implementation (or switch the whole thing to a point light), but I just wanted to stick to the stuff mentioned in my paper (couldn't fit everything in). Nice, we were trying to get our paper finished for SIGGRAPH, but we didn't make it; the deadline for submissions is Monday. Are your light rays actually raycasted?
|
# ? May 8, 2009 19:16 |
|
heeen posted:Nice, we were trying to get our paper finished for SIGGRAPH, but we didn't make it; the deadline for submissions is Monday. Bummer... Not making deadlines sucks... Where are you going to submit now? I was lucky to be totally done with my course work so I could focus 100% on the paper (my advisor guided me a lot, but I did the writing myself, and it took a bit more time than normal, considering it was my first :P), so we actually managed to submit it early (not to SIGGRAPH, though). quote:Are your light rays actually raycasted? Yup. Raycasted and intersected in image space using the ray-scene intersection algorithm presented in this paper on caustics mapping. shodanjr_gr fucked around with this message at 20:05 on May 8, 2009 |
# ? May 8, 2009 19:56 |
|
Back on this project again, playing around with global illumination stuff some more. Most implementations use photon mapping, patch-based radiosity with lightmaps, or real-time SSAO. I made my own method, which works at a much higher resolution, technically at the expense of distribution accuracy. Fortunately, since last time, I worked past that problem by normalizing the scene lighting every pass. Results improved considerably. It's also twice as fast because I converted the main bottleneck to SSE; it now processes something like 400-600 samples per second on my Athlon 64 4000+. Test map, 3 lights: the sun, plus two in the "building". Click here for the full 1280x800 image. (The black spot artifacts where solids intersect with the terrain have been fixed since that screenshot was taken) OneEightHundred fucked around with this message at 21:32 on May 9, 2009 |
# ? May 9, 2009 21:18 |
|
How about giving the sun outside a yellow tint to make the scene a tad more realistic?
|
# ? May 9, 2009 21:24 |
|
OneEightHundred posted:Back on this project again, playing around with global illumination stuff some more. This is a continuation of your work to add radiosity-based light mapping to the Quake 3 engine, right? Looks great
|
# ? May 9, 2009 21:28 |
|
shodanjr_gr posted:This is a continuation of your work to add radiosity-based light mapping to the Quake 3 engine, right? That sounds like an awesome if very difficult project. Cool screens so far!
|
# ? May 9, 2009 22:04 |
|
I dunno, this is one of those times I wish I had more art to work with so I could put out something more impressive. I also wish I wasn't too burned out to continue this work earlier, but hey, I lost my job. It's actually not true radiosity, and I've consequently stopped calling it that, but a convincing emulation of it:

- Small wide-FOV scene renders are snapped from every sample point.
- Pixels from the scene render are used to determine ambient contribution based on direction and manifold area. This is currently the major bottleneck, and converting it to SSE helped a lot.
- Contribution is combined with a recast of direct light influences.
- All light is rescaled to produce the same total scene brightness as just the direct light contribution.
- Repeat.

All passes but the final one are done at low resolution to reduce computation time. The main difference between this and true radiosity is that radiosity normalizes per sample. My theory is that luminescence is uniform enough in real-world scenarios that global normalization will work fine.
|
# ? May 9, 2009 22:42 |
|
Actually, I had the exact same idea for an algorithm in order to speed up offline AO calculations! Great minds think alike
|
# ? May 9, 2009 23:15 |
|
shodanjr_gr posted:Actually, I had the exact same idea for an algorithm in order to speedup offline AO calculations! Hard part of this is going to be converting it into an actual presentable portfolio entry. I'm a programmer, I'm no good at this "art" poo poo.
|
# ? May 10, 2009 08:39 |
|
OneEightHundred posted:Hard part of this is going to be converting it into an actual presentable portfolio entry. I'm a programmer, I'm no good at this "art" poo poo. Why is this that much of a problem? Render the scene using your GI engine and get screenshots of the result. Render the scene using a raytracer that does photon mapping/radiosity/whatever and get screenshots of the result. Put them side by side in your portfolio along with performance numbers. You can also use some bog-standard models/scenes that are familiar to everyone in the graphics industry, like the Cornell box, Stanford bunny, etc.
|
# ? May 10, 2009 10:46 |
|
shodanjr_gr posted:Why is this that much of a problem? When you can have this:
|
# ? May 10, 2009 12:59 |
|
Because the second one may look more impressive, but there is so much going on it is hard to judge the quality of an algorithm from it. Plus I doubt you can get a ray-traced reference image out of CryEngine 2 (which should serve as the standard for your algorithm). As you said, you are a programmer and I suppose you are going to be judged as one, so I would not sweat the lack of "wow-ish" assets. And in the end, there are some freely available scenes out there that look quite impressive (I've seen Sponza.obj used quite a bit).
|
# ? May 10, 2009 13:29 |
|
OneEightHundred posted:I dunno this is one of those times I wish I had more art to work with so I could put out something more impressive. I also wish I wasn't too burned out to continue this work earlier, but hay, lost my job. Can you give some detail on why your method is better/faster/more applicable than, say, q3map2 -light -bounce X etc?
|
# ? May 10, 2009 13:38 |
|
heeen posted:Can you give some detail on why your method is better/faster/more applicable than, say, q3map2 -light -bounce X etc? Of course, q3map2 is further impaired by its light casting algorithm being slow AND scaling very poorly.
|
# ? May 10, 2009 16:01 |