ih8ualot
May 20, 2004
I like turkey and ham sandwiches
I'm building a raytracer! :iamafag:


Click here for the full 1500x1200 image.


It's from scratch, so the file is self-contained. No OpenGL or anything like that. This is one of the prettier programs I've written.

tripwire
Nov 19, 2004

        ghost flow

ih8ualot posted:

I'm building a raytracer! :iamafag:


Click here for the full 1500x1200 image.


It's from scratch, so the file is self-contained. No OpenGL or anything like that. This is one of the prettier programs I've written.
What language are you using?

avesik
Mar 22, 2007

Posting while polygonated
I've been working on a Facebook Chat application for Android in my spare time. You can find it on the Android Market under the name FBabble.



It already has 8500 active users and I can say this is the first time I've made something that's used by so many people!

floWenoL
Oct 23, 2002

ih8ualot posted:

I'm building a raytracer! :iamafag:


Click here for the full 1500x1200 image.


It's from scratch, so the file is self-contained. No OpenGL or anything like that. This is one of the prettier programs I've written.

Why are your balls warped near the bottom?

ih8ualot
May 20, 2004
I like turkey and ham sandwiches

tripwire posted:

What language are you using?

C++, but I'm not using any fancy constructs or anything, so I imagine it could just as easily have been written in C.

floWenoL posted:

Why are your balls warped near the bottom?

Because my camera has a flat lens. It gives those kinds of weird results, but it's a helluva lot easier to program.

Plus, my professor doesn't care what kind of lens I use.
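For the curious, the flat-lens (planar projection) ray setup is only a few lines, and the warping falls out of it naturally: equal-size pixels on a flat plane subtend smaller angles the further they sit from the center. A minimal sketch, with function and parameter names of my own invention:

```python
import math

def primary_ray(px, py, width, height, fov_deg=90.0):
    """Camera ray through pixel (px, py) for a flat (planar) image
    plane. Spheres near the image edges come out stretched because
    the flat plane samples the view directions non-uniformly."""
    aspect = width / height
    half = math.tan(math.radians(fov_deg) / 2.0)
    x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect   # NDC -> plane
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    dx, dy, dz = x, y, -1.0                                # camera faces -z
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / n, dy / n, dz / n)                        # unit direction
```

A curved-lens (fisheye-style) camera would instead map pixels to equal angles, which is what removes the edge distortion at the cost of more math per ray.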

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

floWenoL posted:

Why are your balls warped near the bottom?

Hehehe, you said "balls". :twisted:

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

ih8ualot posted:


Because my camera has a flat lens. It gives those kinds of weird results, but it's a helluva lot easier to program.

Plus, my professor doesn't care what kind of lens I use.

pfft, wuss



V V V



:cool:

(Image Synthesis was my favorite class in my 4th year)

Hubis fucked around with this message at 23:14 on Apr 26, 2009

floWenoL
Oct 23, 2002

ih8ualot posted:

Because my camera has a flat lens. It gives those kinds of weird results, but it's a helluva lot easier to program.

Wouldn't the simplest thing to program be a simple pinhole camera model? :confused:

ih8ualot
May 20, 2004
I like turkey and ham sandwiches

Hubis posted:

pfft, wuss



V V V



:cool:

(Image Synthesis was my favorite class in my 4th year)

I love the title.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

ih8ualot posted:

I love the title.

48 hours of sleep deprivation debugging refractor-refractor interfaces is a hell of a thing.

Thots and Prayers
Jul 13, 2006

A is the for the atrocious abominated acts that YOu committed. A is also for ass-i-nine, eight, seven, and six.

B, b, b - b is for your belligerent, bitchy, bottomless state of affairs, but why?

C is for the cantankerous condition of our character, you have no cut-out.
Grimey Drawer
Sickr.org is an Open Data illness-tracking system.


PrObLeM
May 13, 2004
Slacker?
http://nib.bz/ URL shortening and tracking.
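For anyone wondering how these services work under the hood, the core is usually just base62-encoding a database row id into the short slug. A sketch of that mapping (illustrative only, not nib.bz's actual code):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(n):
    """Turn a numeric row id into a short slug, e.g. nib.bz/<slug>."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, r = divmod(n, 62)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

def decode(slug):
    """Inverse mapping: slug back to the numeric row id."""
    n = 0
    for ch in slug:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Tracking is then just logging a hit row before issuing the redirect.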


iopred
Aug 14, 2005

Heli Attack!
I've been working on a small IRC bot for an hour or so every night for the last week. The interesting thing is that it's in AS3/Flex; an odd choice of language, but I get some free stuff when it comes to doing images, such as my comic generator:



Which generates comics from chat based on what the bot deems to be 'funny'.

The Evan
Nov 29, 2004

Apple ][ Cash Money
^^^ I love this

I'm just about done with an NLP project for finding related concepts using Wikipedia.

(thumbs)



Notice that it fails pretty badly on the second input.

BTW does anyone know if Interface Builder can hook into Python easily?
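One common way to score "related concepts" from Wikipedia is overlap between the articles' outgoing link sets; a toy sketch with made-up data (not necessarily this project's actual method):

```python
def jaccard(a, b):
    """Similarity of two articles, each represented by its set of
    outgoing wiki-links (toy stand-in for real dump data)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical link sets, purely for illustration
links = {
    "Raytracer": {"Rendering", "Snell's law", "Computer graphics"},
    "Radiosity": {"Rendering", "Computer graphics", "Heat transfer"},
    "Sandwich":  {"Bread", "Ham"},
}

def related(concept, k=2):
    """Top-k most related concepts by link-set overlap."""
    others = [c for c in links if c != concept]
    others.sort(key=lambda c: jaccard(links[concept], links[c]), reverse=True)
    return others[:k]
```

Failures like the second input above usually come from sparse or overly generic link sets, which this measure handles badly.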

Supervillin
Feb 6, 2005

Pillbug

PrObLeM posted:

http://nib.bz/ URL shortening and tracking.

Part of me is asking why another URL shortening/tracking service. The rest of me is shutting that part up because I think nib.bz is a cooler name than the others and I'd use it.

Acer Pilot
Feb 17, 2007
put the 'the' in therapist

:dukedog:



Pfft. Who needs graphics anyway.

Lanky Dude
Oct 26, 2005

The Evan posted:

^^^ I love this

I'm just about done on an NLP project for finding related concepts using wikipedia.

(thumbs)



Notice that it fails pretty badly on the second input.

BTW does anyone know if Interface Builder can hook into Python easily?

PyObjC is actually pretty nice so far. You should be able to adapt some of the concepts from
http://lethain.com/entry/2008/aug/22/an-epic-introduction-to-pyobjc-and-cocoa/
to link a text view to the output of your app or something (depends what you wanna do)

Xerol
Jan 13, 2007


Putting a bunch of stuff together for a Generations update.

Warning: 1+MB PNGs ahead (in links):



Also working on a 9 minute HD video, so that'll be up soon(ish - it's going to take an hour to render and 3-4 to upload).

Quick rundown of new features for the next release, some still not yet implemented:

-Variety of color coding options: Classic (like 0.16 and prior), Aged (shown above), Monotone, and a couple others. Eventually want to turn it into an equation-based system so you can 'program' your own with various parameters (x,y,z cell location, age, born/died cells, etc.)

-More useful speed control. Right now this just lets you pick what speed to run at (or not run, you can stop the simulation entirely now). I'm not sure if this'll make it into the update, but I'm eventually going to add Reverse and Hyper modes - reverse will let you 'rewind' the simulation, Hyper will do more than one simulation update per display frame. Both require engine changes that, while not tremendous, are substantial and might get put off in favor of a timely release.

-"Save States" (think emulators), being able to clear the field, portions of the field, single cell add/delete, and changing rule sets mid-simulation. That last one may not get in this release as it'll tie in a bit with the Rewind feature (rewinding the simulation and then resuming can (by option) overwrite "future" layers with newly simulated ones).

-Better camera controls.

-More interface components, all options will have clickable controls. There will also be a menu with even more options, like more permanent saves.

At some point I might port the whole thing over to C#/DirectX from FreeBasic/OpenGL but that's a whole project in itself (I need to learn C#, first of all).
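For readers unfamiliar with the "Generations" family of cellular automata, one update step looks roughly like this: dead cells are born, live cells survive or start aging, and aging cells tick toward death. The born/survive sets and state count below are illustrative, not necessarily the program's actual rules:

```python
def step(grid, born={3}, survive={2, 3}, states=4):
    """One 'Generations'-style CA update on a 2D toroidal grid.
    State 0 = dead, 1 = alive, 2..states-1 = dying/aging."""
    h, w = len(grid), len(grid[0])

    def live_neighbours(y, x):
        # Only state-1 cells count as alive for the rules
        return sum(grid[(y + dy) % h][(x + dx) % w] == 1
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))

    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c, n = grid[y][x], live_neighbours(y, x)
            if c == 0:
                new[y][x] = 1 if n in born else 0
            elif c == 1:
                new[y][x] = 1 if n in survive else 2
            else:
                new[y][x] = (c + 1) % states   # age, then die
    return new
```

The "Aged" color coding in the screenshots maps naturally onto those intermediate states; the equation-based scheme described above would just replace the state-to-color lookup with a user function.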

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Xerol posted:

Putting a bunch of stuff together for a Generations update. [...]

Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive.
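The fogging itself is a one-line blend per cell; a sketch of exponential height fog (all constants illustrative, tune to taste):

```python
import math

def height_fog(color, depth, fog_color=(0.8, 0.85, 0.9), density=0.15):
    """Blend a cell's colour toward `fog_color` by how many layers
    `depth` it sits below the top of the stack. Exponential falloff
    keeps the top few layers distinct while washing out deep ones."""
    f = 1.0 - math.exp(-density * depth)    # 0 at the top layer
    return tuple(c * (1.0 - f) + fc * f for c, fc in zip(color, fog_color))
```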

leedo
Nov 28, 2000

Hubis posted:

Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive.

I was going to recommend this as well.

shodanjr_gr
Nov 20, 2007
Here's a link to a small vid of my undergraduate dissertation (as promised about a month ago :haw:).

Realistic underwater caustics and godrays (Rapidshare)

http://www.youtube.com/watch?v=khVuTTX8iy4

Sorry for the rapidshare, but I don't have access to our group's hosting at the moment in order to upload it (the current version of the project site is a bit out of date).

At some point in the future (after/if this gets published) I'll also post a link to the source + paper for those who care :P.

edit: Doh... totally forgot about YouTube. Added a link, it should be up and running in a few minutes.
edit2: If the SD quality vid is not working (it worked fine for some reason up until the HD vid became available), then watch the HD version.

shodanjr_gr fucked around with this message at 17:31 on May 7, 2009

tripwire
Nov 19, 2004

        ghost flow

shodanjr_gr posted:

Here's a link to a small vid of my undergraduate dissertation (as promised about a month ago :haw:).

Realistic underwater caustics and godrays

Put it on youtube!

terminatusx
Jan 27, 2009

:megaman:Indie Game Dev and Bringer of the Apocalypse

shodanjr_gr posted:

Here's a link to a small vid of my undergraduate dissertation (as promised about a month ago :haw:).

Realistic underwater caustics and godrays (Rapidshare)

http://www.youtube.com/watch?v=khVuTTX8iy4

holy crap :iia:

Mithaldu
Sep 25, 2007

Let's cuddle. :3:

From the outside the water looks REALLY impressive. You might want to do some tweaks for the "in water" representation, though: specifically, the light angle should be a bit off, and the light colour should include some red and yellow to make it more sun-like.

Xerol
Jan 13, 2007


Hubis posted:

Consider adding a (very) small degree of height-based 'fogging'; the human brain uses this to help determine distance and relative depth, and I think it would make seeing what's going on below the top layer much more intuitive.

Something like in the original version?



Also, video's up, I'll annotate it later: http://www.youtube.com/watch?v=T26nQX1Pc6g&fmt=22

shodanjr_gr
Nov 20, 2007

Mithaldu posted:

From the outside the water looks REALLY impressive. You might want to do some tweaks though for the "in water" representation, specifically, light angle should be a bit off and light colour should include some red and yellow to make it more sun-like.

At the moment it is simulating a "mid-noon" sun, so the light source is essentially directional and staring straight down. It's quite easy to change that around in the implementation (or switch the whole thing to a point light), but I just wanted to stick to the stuff mentioned in my paper (couldn't fit everything in).

You are right with regard to the sun color though. I'll try to fix that once I do my revision :).

Keep the comments coming :D.

quote:

Something like in the original version?
Yup. Just make it a bit stronger (you want the user to be able to tell apart the top 3-4 generations easily, so it's important that they differentiate color-wise).

shodanjr_gr fucked around with this message at 18:15 on May 7, 2009

heeen
May 14, 2005

CAT NEVER STOPS

shodanjr_gr posted:

At the moment it is simulating a "mid-noon" sun, so the light source is essentially directional and staring straight down. It's quite easy to change that around in the implementation (or switch the whole thing to a point light), but I just wanted to stick to the stuff mentioned in my paper (couldn't fit everything in).

You are right with regard to the sun color though. I'll try to fix that once I do my revision :).

Keep the comments coming :D.

Yup. Just make it a bit stronger (you want the user to be able to tell apart the top 3-4 generations easily, so it's important that they differentiate color-wise).

Nice, we were trying to get our paper finished for Siggraph, but we didn't make it; the deadline for submissions is Monday.
Are your light rays actually raycasted?

shodanjr_gr
Nov 20, 2007

heeen posted:

Nice, we were trying to get our paper finished for Siggraph, but we didn't make it; the deadline for submissions is Monday.

Bummer...Not making deadlines sucks...Where are you going to submit now?

I was lucky to be totally done with my course work so I could focus 100% on the paper (my advisor guided me a lot, but I did the writing myself and it took a bit more time than normal, considering it was my first :P), so we actually managed to submit it early (not to SIGGRAPH though).

quote:

Are your light rays actually raycasted?

Yup. Raycasted and intersected in image space using the ray-scene intersection algorithm presented in the caustics mapping paper.
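The refraction those rays undergo at the water surface is plain vector Snell's law; a minimal sketch, assuming a water IOR of about 1.33 (not necessarily what the paper uses):

```python
import math

def refract(d, n, eta=1.0 / 1.33):
    """Refract unit direction d through unit surface normal n
    (air -> water by default). Returns None on total internal
    reflection. Standard vector form of Snell's law."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    c = eta * cos_i - math.sqrt(k)
    return tuple(eta * di + c * ni for di, ni in zip(d, n))
```

Caustics then come from accumulating where bundles of these refracted rays converge on the sea floor.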

shodanjr_gr fucked around with this message at 20:05 on May 8, 2009

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Back on this project again, playing around with global illumination stuff some more. Most implementations use photon mapping or patch-based radiosity with lightmaps, or real-time SSAO. I made my own method, which works at a much higher resolution, technically at the expense of distribution accuracy. Fortunately, since last time, I worked past that problem by normalizing the scene lighting every pass. Results improved considerably. :)

It's also twice as fast because I converted the main bottleneck to SSE; it now processes something like 400-600 samples per second on my Athlon 64 4000+.

Test map, 3 lights: The sun, plus two in the "building"


Click here for the full 1280x800 image.


(The black spot artifacts where solids intersect with the terrain have been fixed since that screenshot was taken)

OneEightHundred fucked around with this message at 21:32 on May 9, 2009

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
How about giving the sun outside a yellow tint to make the scene a tad more realistic? :)

shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

Back on this project again, playing around with global illumination stuff some more.

Added photon normalization so the scene doesn't gain or lose light as pass count changes, and the results appear to be noticeably more convincing. It's also twice as fast because I converted the manifold correction code to SSE. :)


This is a continuation of your work to add radiosity-based light mapping to the Quake 3 engine, right?

Looks great :)

tripwire
Nov 19, 2004

        ghost flow

shodanjr_gr posted:

This is a continuation of your work to add radiosity-based light mapping to the Quake 3 engine, right?

Looks great :)

That sounds like an awesome if very difficult project. Cool screens so far!

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I dunno, this is one of those times I wish I had more art to work with so I could put out something more impressive. I also wish I hadn't been too burned out to continue this work earlier, but hey, lost my job.


It's actually not true radiosity, and I've consequently stopped calling it that, but a convincing emulation of it:
- Small wide-FOV scene renders are snapped from every sample point
- Pixels from scene render are used to determine ambient contribution based on direction and manifold area. This is currently the major bottleneck, and converting it to SSE helped a lot.
- Contribution is combined with a recast of direct light influences
- All light is rescaled to produce the same total scene brightness as just the direct light contribution.
- Repeat

All passes but the final one are done at low resolution to reduce computation time.

The main difference between this and true radiosity is that radiosity normalizes per sample. My theory is that luminescence is uniform enough in real-world scenarios that global normalization will work fine.
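The global normalization step is the easy part to show in code: after each gather pass, rescale every sample so the scene's total brightness matches the direct-light-only total. A toy sketch, not the actual engine code:

```python
def normalize_pass(samples, direct_total):
    """Rescale bounced lighting so the whole scene keeps the same
    total brightness as the direct-only pass. `samples` is a list
    of per-sample luminance values after one gather pass."""
    total = sum(samples)
    if total == 0.0:
        return samples          # nothing gathered, nothing to scale
    scale = direct_total / total
    return [s * scale for s in samples]
```

True radiosity would compute an energy balance per sample instead of one global scale factor, which is exactly the distribution-accuracy trade-off described above.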

shodanjr_gr
Nov 20, 2007
Actually, I had the exact same idea for an algorithm to speed up offline AO calculations!

Great minds think alike :D

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Actually, I had the exact same idea for an algorithm in order to speedup offline AO calculations!

Great minds think alike :D
This is, of course, why software patents are terrible, because no matter how clever your ideas seem, there's always someone else thinking of it.


Hard part of this is going to be converting it into an actual presentable portfolio entry. I'm a programmer, I'm no good at this "art" poo poo. :smith:

shodanjr_gr
Nov 20, 2007

OneEightHundred posted:

Hard part of this is going to be converting it into an actual presentable portfolio entry. I'm a programmer, I'm no good at this "art" poo poo. :smith:

Why is this that much of a problem?

Render the scene using your GI engine, get screenshots of the result. Render the scene using a raytracer that does photon mapping/radiosity/whatever, get screenshots of the result.

Put them side-by-side in your portfolio along with performance numbers.


You can also use some bog standard models/scenes that are familiar to everyone in the graphics industry, like the Cornell box, Stanford bunny etc.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Why is this that much of a problem?
Because why have this:


When you can have this:

shodanjr_gr
Nov 20, 2007
Because the second one may look more impressive, but there is so much going on that it is hard to judge the quality of an algorithm from it.

Plus I doubt you can get a ray-traced reference image out of CryEngine 2 (which should serve as the standard for your algorithm).

As you said, you are a programmer and I suppose you are going to be judged as one, so I would not sweat the lack of "wow-ish" assets. And in the end, there are some freely available scenes out there that look quite impressive (I've seen Sponza.obj used quite a bit).

heeen
May 14, 2005

CAT NEVER STOPS

OneEightHundred posted:

It's actually not true radiosity, and I've consequently stopped calling it that, but a convincing emulation of it. [...]

Can you give some detail on why your method is better/faster/more applicable than, say, q3map2 -light -bounce X etc?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

heeen posted:

Can you give some detail on why your method is better/faster/more applicable than, say, q3map2 -light -bounce X etc?
q3map2's approach is conceptually a decent idea: it takes polygons and chops them up until the light gradient is low enough, then spawns area lights from them. The approach I'm using is just faster, because doing a scene render and running a big SIMD multiply/accumulate over it is faster than casting a light by several orders of magnitude.

Of course, q3map2 is further impaired by its light casting algorithm being slow AND scaling very poorly.
