toiletbrush
May 17, 2010
I've been interested in ray-tracing for years, and always wanted to write my own. So tonight I finally sat down and started writing one, deliberately not reading any articles or anything about it, to see how far I can get on my basic understanding.

This is it so far; it only does the first intersection at the moment


And with a dodgy as heck simulation of DOF...


This weekend I'll get it doing multiple bounces.


toiletbrush
May 17, 2010

Luigi Thirty posted:

update: i got a uart running and figured out how to put hex digits on my 4 digit display that come in over the wire

soon i'll be writing my own risc cpu
when you write a book I'll order 100 copies

Path tracer progress...




toiletbrush fucked around with this message at 21:02 on Nov 24, 2018

toiletbrush
May 17, 2010
More Swift path tracer progress...







it's really slow and noisy, about 5 seconds to do a single 8-bounce sample for the whole image, time to profile!
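For context, a 'sample' here means one full pass over the image: for every pixel, fire a camera ray and follow it for up to 8 bounces, then add whatever comes back into an accumulation buffer. Roughly this shape (heavily simplified and grayscale; traceRadiance stands in for the actual recursive trace):
code:
// One "sample" = one pass over the whole image. `traceRadiance` is a stand-in
// for the real recursive trace; grayscale for brevity.
func addSample(width: Int, height: Int, maxBounces: Int,
               traceRadiance: (_ x: Int, _ y: Int, _ maxBounces: Int) -> Double,
               accumulation: inout [Double], sampleCount: inout Int) {
    for y in 0..<height {
        for x in 0..<width {
            accumulation[y * width + x] += traceRadiance(x, y, maxBounces)
        }
    }
    sampleCount += 1   // the displayed image is accumulation / sampleCount
}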

toiletbrush fucked around with this message at 18:21 on Nov 30, 2018

toiletbrush
May 17, 2010
I'm writing an app that emulates the display modes of a bunch of 8-bit micros, like the C64 and Spectrum. This is it emulating a C64 in multicolour mode, where you only get 1/2 horizontal resolution, but can choose 3 colours per 4x8 pixel block...









It turns out choosing which two or three colours to use is way harder than I thought
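The dumb first version I've got is basically a popularity vote of nearest-palette matches per block, roughly like this (a simplified sketch; the RGB type and helper are made up, not the actual app code):
code:
// Rough sketch of the per-block colour choice. The RGB type, palette and
// nearestColourIndex helper are simplified stand-ins, not the real app code.
struct RGB { var r, g, b: Double }

// Plain squared-distance match; a perceptual colour space would do better.
func nearestColourIndex(_ pixel: RGB, in palette: [RGB]) -> Int {
    var best = 0
    var bestDistance = Double.greatestFiniteMagnitude
    for (i, c) in palette.enumerated() {
        let dr = pixel.r - c.r, dg = pixel.g - c.g, db = pixel.b - c.b
        let distance = dr * dr + dg * dg + db * db
        if distance < bestDistance { bestDistance = distance; best = i }
    }
    return best
}

// Pick the `count` palette entries that the block's pixels map to most often.
func chooseBlockColours(block: [RGB], palette: [RGB], count: Int) -> [Int] {
    var votes = [Int](repeating: 0, count: palette.count)
    for pixel in block {
        votes[nearestColourIndex(pixel, in: palette)] += 1
    }
    return votes.enumerated()
        .sorted { $0.element > $1.element }
        .prefix(count)
        .map { $0.offset }
}
The trouble is that the popularity vote knows nothing about the error you take on when every remaining pixel gets forced onto one of the winners, which is presumably why it's harder than it looks.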

toiletbrush
May 17, 2010
ZX Spectrum Arnie


Amiga HAM fail (I was getting the choice of which component to modify wrong)


Amiga HAM success!

toiletbrush
May 17, 2010

Corla Plankun posted:

if you threw a fake CRT simulating blur-bend thing on those i bet they would look :krad:
Good idea! There's no glare or curve yet, but I've added a crappy RGB grill effect and also an attempt to emulate the horizontal blurring/ghosting...
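The grille/ghosting pass is nothing clever, roughly this per scanline (a simplified sketch of the idea, not the code as it actually runs):
code:
// Very rough sketch of the grille + ghosting pass. The real thing runs per
// scanline on the emulated framebuffer; names and numbers here are made up.
struct Pixel { var r, g, b: Double }

func crtPass(line: [Pixel], grilleStrength: Double = 0.35, ghosting: Double = 0.4) -> [Pixel] {
    var out = line
    var held = Pixel(r: 0, g: 0, b: 0)   // previous pixel, for horizontal ghosting
    for x in out.indices {
        // One-pole IIR along the scanline smears bright pixels to the right.
        held.r = held.r * ghosting + out[x].r * (1 - ghosting)
        held.g = held.g * ghosting + out[x].g * (1 - ghosting)
        held.b = held.b * ghosting + out[x].b * (1 - ghosting)
        var p = held
        // Fake aperture grille: each screen column favours one of R/G/B.
        switch x % 3 {
        case 0: p.g *= 1 - grilleStrength; p.b *= 1 - grilleStrength
        case 1: p.r *= 1 - grilleStrength; p.b *= 1 - grilleStrength
        default: p.r *= 1 - grilleStrength; p.g *= 1 - grilleStrength
        }
        out[x] = p
    }
    return out
}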










Your telly's hosed mate!

toiletbrush
May 17, 2010

Wheany posted:

Doesn't c64 have the global background color as the 4th color per block? You could take the 8x8 pixel area of the original image and find the smallest tetrahedron of c64 colors that contains all the colors from the original 8x8 block. One of the points of the tetrahedron would always be the global bg color, but the other 3 could be chosen freely.
you're right, it's four colours! but one of those four has to be the same for all 4x8 blocks?

toiletbrush
May 17, 2010
excellent HAM chat

echinopsis posted:

does this fake HAM in a way that is faithful to what HAM was or just add garbled colour to fake it?
it's properly emulating HAM, but I'm not being very clever about how I choose the base 16 colours
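For anyone curious, the per-pixel HAM decision is pretty simple: each pixel either picks one of the 16 base colours, or holds the previous pixel's colour and replaces exactly one of R/G/B with a new 4-bit value, so the encoder just tries every option and keeps whichever lands closest. Something like this (a simplified sketch, not my actual code):
code:
// Sketch of per-pixel HAM6 encoding: either set a base palette colour, or hold
// the previous colour and modify one 4-bit component. Types/helpers simplified.
struct Colour4: Equatable { var r, g, b: Int }   // 0...15 per component

enum HamOp {
    case setPalette(index: Int)
    case modifyRed(Int), modifyGreen(Int), modifyBlue(Int)
}

func distance(_ a: Colour4, _ b: Colour4) -> Int {
    let dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b
    return dr * dr + dg * dg + db * db
}

func encodePixel(target: Colour4, previous: Colour4, palette: [Colour4]) -> (HamOp, Colour4) {
    // Option 1: nearest base palette entry.
    var bestOp: HamOp = .setPalette(index: 0)
    var bestColour = palette[0]
    var bestDist = distance(target, palette[0])
    for (i, c) in palette.enumerated() where distance(target, c) < bestDist {
        bestOp = .setPalette(index: i); bestColour = c; bestDist = distance(target, c)
    }
    // Options 2-4: hold the previous colour, replace exactly one component.
    let candidates: [(HamOp, Colour4)] = [
        (.modifyRed(target.r),   Colour4(r: target.r,   g: previous.g, b: previous.b)),
        (.modifyGreen(target.g), Colour4(r: previous.r, g: target.g,   b: previous.b)),
        (.modifyBlue(target.b),  Colour4(r: previous.r, g: previous.g, b: target.b)),
    ]
    for (op, c) in candidates where distance(target, c) < bestDist {
        bestOp = op; bestColour = c; bestDist = distance(target, c)
    }
    return (bestOp, bestColour)
}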



and fixed c64 version


fukit, speccy version too

toiletbrush fucked around with this message at 00:17 on Aug 1, 2019

toiletbrush
May 17, 2010

Doc Block posted:

lua's interpreter seems pretty solid IMHO. the nice thing about it is that i can rip out everything without having to touch lua's source code, and then only allow stuff that's "safe", all through the interpreter's C API.

yeah, but can i write a better interpreter? probably not. and we still come back to having to design and write a high-ish level language that's also easy for newbies.
I've never used Lua so this might be a dumb point but is there a one-to-one mapping between Lua instructions and the byte code it produces? I'm guessing not and that might make it a bit hard for people to figure out their script budget per timestep.

Writing your own interpreter for a simple language could be fun, and might even be easier for noobs to pick up than Lua if it's designed well with the right abstractions and whatnot. Plus you'd have enough control for crazy stuff like letting people spend some sort of budget on 'hardware' like extra/better sensors or an FPU that takes fewer cycles to do SQRTs or whatever.
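To make the budget idea concrete, a hand-rolled bytecode VM could just charge a known cycle cost per opcode and stop when the timestep's budget runs out. A totally hypothetical sketch (nothing to do with Lua's real bytecode):
code:
// Hypothetical sketch of a cycle-budgeted VM step, so a per-timestep script
// budget is predictable. The opcodes and costs are made up.
enum Op {
    case push(Int), add, mul, sqrt, jump(Int), halt
}

struct BudgetedVM {
    var program: [Op]
    var stack: [Int] = []
    var pc = 0

    // A fixed cost per opcode means players can reason about their budget.
    func cost(of op: Op) -> Int {
        switch op {
        case .push, .add, .jump: return 1
        case .mul: return 2
        case .sqrt: return 8      // an "FPU upgrade" could lower this
        case .halt: return 0
        }
    }

    mutating func run(budget: Int) {
        var remaining = budget
        while pc < program.count {
            let op = program[pc]
            guard cost(of: op) <= remaining else { return }   // out of cycles this timestep
            remaining -= cost(of: op)
            pc += 1
            switch op {
            case .push(let v): stack.append(v)
            case .add: let b = stack.removeLast(); stack[stack.count - 1] += b
            case .mul: let b = stack.removeLast(); stack[stack.count - 1] *= b
            case .sqrt: stack[stack.count - 1] = Int(Double(stack[stack.count - 1]).squareRoot())
            case .jump(let target): pc = target
            case .halt: return
            }
        }
    }
}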

quote:

yeah, my thought was to randomize the execution order every logic frame or X number of logic frames.
Is it not possible to run them all with the exact same model/state of the world to figure out their actions, then just have some conflict resolution step when the actions are performed?

toiletbrush fucked around with this message at 13:34 on Sep 7, 2019

toiletbrush
May 17, 2010

AtomD posted:

over the past couple of months ive persistently made the dumb decision of using a daw instead of learning notation software


https://soundcloud.com/atomd/overture-for-the-botanist

the biggest issue i'm having is having to manually gently caress with instrument dynamics and tempo using automation curves to try and get what i'm after. i'm hoping to try and get this into something like dorico to see if all that poo poo is more consistent.
it sounds a bit dry and non-reverby imho but thats pretty drat awesome all the same

toiletbrush
May 17, 2010

AtomD posted:

this is my first time trying out convolution reverb. i'm going mess with some dials.
Nice! Convolution reverbs are awesome, just make sure you use one that lets you use regular audio samples as impulse responses and then go nuts recording/synthesising your own cos thats most of the fun.

toiletbrush
May 17, 2010
I've been spending the last few days writing an assembly-writing game (inspired by zacktronics obvs) for iOS. I should caveat this with the fact most of what I know about assembly comes from writing CHIP-8 emulators.

Emulated cpu features...
- one byte per instruction, including operand
- 16 bytes of ram! (see above)
- unlimited program rom
- separate 16 byte, badly behaved stack that wraps around when pushing/popping, sets flags when it does, if you care! Can be any size, though, really
- no registers except PC/SP
- three flags!
- 32 instructions, mostly familiar but many with subtle differences

The reason I wanted a super simple instruction set is that the game generates its own puzzles - basically generating random code and testing it until it finds a program that terminates and produces 'interesting' outputs given its inputs. Then you're given the inputs and corresponding outputs and have to recreate them.
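The generator itself is brute force, along these lines (heavily simplified; the real instruction set and the 'interesting' test are a lot fussier):
code:
// Heavily simplified sketch of the puzzle generator: spit out random byte
// programs, run them on some sample inputs, and keep the ones that halt and
// produce something that isn't constant or just the input echoed back.
func generatePuzzle(
    instructionCount: Int,
    sampleInputs: [[UInt8]],
    run: ([UInt8], [UInt8]) -> [UInt8]?    // program, input -> output (nil = didn't halt)
) -> (program: [UInt8], examples: [(input: [UInt8], output: [UInt8])])? {
    for _ in 0..<10_000 {                  // give up eventually
        let program = (0..<instructionCount).map { _ in UInt8.random(in: 0...255) }
        var examples: [(input: [UInt8], output: [UInt8])] = []
        for input in sampleInputs {
            guard let output = run(program, input) else { break }   // hung or crashed
            examples.append((input, output))
        }
        guard examples.count == sampleInputs.count else { continue }
        // Crude "interesting" filter: outputs vary with input and aren't an echo.
        let outputs = examples.map { $0.output }
        let allSame = outputs.dropFirst().allSatisfy { $0 == outputs[0] }
        let isEcho = examples.allSatisfy { $0.input == $0.output }
        if !allSame && !isEcho { return (program, examples) }
    }
    return nil
}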

It's probably going to be a terrible game because the programs it generates are pretty polarised between 'outputs the input + 1' and 'outputs reams of intractable garbage', but it's getting better, and it's really fun working on the instruction set, changing it to make it more efficient or flexible or whatever. Also it's quite fun generating images based on two-parameter functions. A random bit of code just generated this...


Here's a program to find the average of the input (yes it will overflow and gently caress up but lol)

toiletbrush fucked around with this message at 14:24 on Oct 10, 2020

toiletbrush
May 17, 2010
I've got a basic text renderer working for my asm game, next up is adding a crt effect and actually editing the code...

toiletbrush
May 17, 2010
spent some more time on asm game tonight, now you can edit code! The ui is sort of hellish but I'm not sure how else to do it



I'm definitely gonna speed up some of that drawing, too

toiletbrush
May 17, 2010

orly posted:

For my pandemic music compositions, I finally fell into the vaporwave / city pop youtube sinkhole and had to do something about it.

https://www.youtube.com/watch?v=PScK5Fbi-xs
this makes me want to play Secret of Monkey Island 2 for some reason

toiletbrush
May 17, 2010

Stack Machine posted:

I make bad games and demos.

Getting real-time music synthesis and sprite rendering to work on my mac plus (holy poo poo that thing is slow) was a lot of fun. But I'm a huge dork, so...
I've been watching a ton of c64/amiga demoscene stuff lately and been thinking about getting a c64 emulator and learning enough 6502 to recreate a few effects.

I love trying to figure out how amiga demo effects work but there's so much more to learn vs c64
https://www.youtube.com/watch?v=pYtleuGV7ok

toiletbrush
May 17, 2010
I'm going to write a Laser Squad clone, for the ~17th time, except this time I'm definitely going to finish it. Definitely.

toiletbrush
May 17, 2010

Luigi Thirty posted:

loving around with QuickDraw 3D

https://twitter.com/LuigiThirty/status/1440604185548902404

also have a Performa 6360 coming to serve as a remote debugging target for my Mac stuff.
Powerplant...oh my...that brings back distant memories

toiletbrush
May 17, 2010

PIZZA.BAT posted:

in my new job i'm not really hands on keyboard anymore in my role which funny enough has made programming in general fun again. about a year ago i decided i was gonna teach myself ai by teaching a bot how to play poker. i thought it was gonna be like a month tops to get something that mostly worked but wouldn't really play well against players who actually knew what they were doing. it turned into a massive rabbit hole of research / using tools in ways they weren't designed because no one else has really done what i'm trying to accomplish before

i've finally- FINALLY- built out the tool framework such that i can now start testing hypotheses and actually, you know, start training models and loving around with things

my first hypothesis was just confirmed: the models would initially adopt a strategy which favors way overfolding as i deliberately set up the games such that they couldn't bleed out from the blinds. they would then very quickly abandon that strategy in subsequent generations for pretty obvious reasons. i accidentally confirmed a second hypothesis as well: that each subsequent generation would take gradually longer than the one before it because as the models converge it will take longer with each generation to find a meaningful difference between the models

i'm pretty excited!!
That's cool as hell.

Once you've got it working, train an NN to play cheat/liar/bullshit complete with AI generated facial expressions so it can bluff convincingly

toiletbrush
May 17, 2010
I'm writing a game which needs procedural dialogue, so I came up with a lovely grammar where you can define, e.g a person's name as "$firstName $secondName" and '$' basically means to look up another rule with that name, standard stuff. It's quick and dirty and works, but its crappy because (amongst other things) it's not typesafe and it's not refactorable - if I want to change the $firstname rule to $forename I gotta rely on copy and paste. I've also got hundreds of rules now and it's getting sort of unmanageable.

So, I just had a go at typesafe-ing it up, and came up with this:
code:
indirect enum Rule: CustomStringConvertible {
    case literal(String)    // plain text
    case choose([Rule])     // pick one of the options at random
    case maybe(Rule)        // 50/50 chance of expanding to the rule or to nothing
    case concat([Rule])     // expand each rule in order and join the results
    
    // Expanding a rule is a recursive walk; every access re-rolls the randomness.
    var result: String {
        switch self {
            case .choose(let options): return options.randomElement()!.result
            case .literal(let value): return value
            case .maybe(let production): return [true, false].randomElement()! ? production.result : ""
            case .concat(let values): return values.map { p in p.result }.joined()
        }
    }
    
    var description: String { result }
}

extension Rule: ExpressibleByStringInterpolation {
    init(stringInterpolation: RuleInterpolation) {
        self = .concat(stringInterpolation.value)
    }
    
    struct RuleInterpolation: StringInterpolationProtocol {
        var value: [Rule]
        
        init(literalCapacity: Int, interpolationCount: Int) {
            value = []
        }
        
        mutating func appendLiteral(_ literal: String) {
            self.value.append(.literal(literal))
        }
        
        mutating func appendInterpolation(_ value: Rule)
        {
            self.value.append(value)
        }
    }
}

extension Rule: ExpressibleByStringLiteral {
    init(stringLiteral: String) {
        self = .literal(stringLiteral)
    }
}

extension Rule: ExpressibleByArrayLiteral {
    init(arrayLiteral: Rule...) {
        self = .choose(Array(arrayLiteral))
    }
}
It's kludgy and hacky and prob gross, but the grammar becomes typesafe and refactorable:
code:
let size: Rule = ["big", "small", "tiny", "huge"]
let bodyPart: Rule = ["rear end", "butt"]

let thing: Rule = "Nice \(size) \(.maybe("flabby")) \(bodyPart), my \(["dude", "friend"])"
...which prints out...
Nice small rear end, my dude
Nice big flabby butt, my friend
Nice small flabby rear end, my dude
Nice small rear end, my friend
etc etc

I'm dead pleased with myself

toiletbrush
May 17, 2010
is the neural net playing 'itself' all the time, or does it have a population of more or less successful nets to play against? I'm missing a bunch of back story so apologies if this is a dumb question

toiletbrush
May 17, 2010

PIZZA.BAT posted:

nah not a dumb question i haven't posted about this much. i generate a pool of essentially random neural nets and they play against each other. each 'round' goes for 300 hands where at the end i take their stacks and add it to their total winnings. i then shuffle the players, distribute them to tables, and have them play again. i do this over and over until i see their overall ranks stabilize as the better players bubble up to the top and worse players sink. once they've stabilized i take the players who are a standard deviation above the rest, clone them, and use them to spawn the next generation. after every hundred (or thousand, ten thousand, idk i'm still working on this) generations i take the current winners and have them play against a random distribution of older generations to ensure that they are in fact still getting better. once i can't see any difference then the model has advanced as far as it's going to go and that's that

that's the dumb implementation that i'm currently working on, at least. the better algorithm will also be a random pool that plays against itself but will also be much more intelligent on how it evolves and comes with an added bonus of knowing when it has finished converging without having to play against older generations, although i'll still leave that part in just to make sure it doesn't get itself into some sort of rock/paper/scissors infinite loop
oh right nice, I was wondering if it was over-fitting playing itself. what you've got is way better, but imho it still sounds more like its over-fitting to the winning population rather than a learning/mutation rate problem.

you might want to have a significant part of *every* generation's population be unrelated to the winning group from the previous generation, rather than every 100/1000/10000 or whatever, maybe plucked from older generations and possibly even a few newly spawned nets as well to kick things around a bit.
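i.e. something like this when you build each new generation, just to keep the gene pool honest (a rough sketch; the model type, closures and 10% ratios are obviously made up):
code:
// Sketch of building generation n+1 with deliberate diversity: unchanged
// elites, mutated elites, a few revived older champions, and fresh randoms.
// `Net`, the closures and the ratios are all placeholders.
func nextGeneration<Net>(
    elites: [Net],
    archive: [Net],                 // champions kept from older generations
    populationSize: Int,
    mutate: (Net) -> Net,
    random: () -> Net
) -> [Net] {
    var population = elites         // keep the winners unchanged
    population += archive.shuffled().prefix(populationSize / 10)
    population += (0..<populationSize / 10).map { _ in random() }
    while population.count < populationSize {
        population.append(mutate(elites.randomElement()!))
    }
    return Array(population.prefix(populationSize))
}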

toiletbrush
May 17, 2010

echinopsis posted:

this reminds me of that video where the guy makes all the different chess AIs and make them play each other idk if you’ve seen it but it’s interesting and dude makes me laugh

can’t find it but the idea is that he wants to play AI chess but not good AI chess because he’s not that good so he’s kinda hunting for a middle ground skill AI and trying out all sorts of different models etc
this one? its awesome
https://www.youtube.com/watch?v=DpXy041BIlA

toiletbrush
May 17, 2010
That sounds like something based on a big assumption but I've done nothing with neural nets for like two decades so I probably have no idea what I'm talking about. How do you 'breed' the neural nets? Combining NNs with genetic algorithms is something that I've been thinking about for ages, but I don't get how the breeding bit works - surely if each successful NN has a different 'model' for what works, then randomly combining two successful NNs would just give you gibberish?

Own idiot spare time project - have spent literally 6 weeks (on and off, obvs) trying to figure out why my iOS photography app only samples at 30fps, rather than 240fps as configured. Turns out it was an ordering issue, that is *not documented anywhere* in Apple's docs, and is totally counterintuitive. Thanks apple. Thapple.

toiletbrush
May 17, 2010

PIZZA.BAT posted:

yeah it’s :airquote: evolution it doesn’t actually involve breeding at all. basically the best performers in a generation are cloned to prevent regression and then they’re copied with slight modifications to their weights to fill up the rest of the population. if some of those wiggles results in better performance then they get to be cloned into the next generation, if not then they’re dumpstered
hah awesome, thats exactly how I was going to do my stock market NN so I didn't have to learn how back propagation worked

I guess back prop is exactly this but just done analytically rather than randomly
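for the record, the 'wiggle the weights' version I had in mind is literally just this (a sketch; flat weight vector, made-up mutation rate and scale):
code:
// Clone a flat weight vector and nudge a random subset of the weights.
// The rate/scale numbers are placeholders, not tuned values.
func mutateWeights(_ weights: [Double], rate: Double = 0.1, scale: Double = 0.05) -> [Double] {
    weights.map { w in
        Double.random(in: 0...1) < rate ? w + Double.random(in: -scale...scale) : w
    }
}
backprop just replaces the random nudges with the gradient, which is why it gets there so much faster.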

toiletbrush
May 17, 2010
yeah that's another way in which a wide population of opponents might help, as the NN can't assume the other players will always behave in a certain way. I've got another idiot spare time project where I've got a similar issue - its just a board game, but there are a bunch of dependencies that mean that decision tree searches won't work. I could try using an NN but I've got no idea how to map the board to inputs, and I really don't want to have to learn back prop (and I don't want to cheat and use an existing ML framework either)

it feels like modern ML often just turns a programming/algorithmic problem into a training problem, that isn't really any easier to solve. Plus rather than crashes with stack traces you get unwanted behaviour guided by an intractable black box of equations with no mapping to any sort of decision process a person could understand.

toiletbrush
May 17, 2010

eschaton posted:

I’ve been hacking on and off on an HP1000 A-series emulator in Swift and have developed a nice pattern for how to implement it using value-carrying enums to represent the instructions
yeah that's a really nice pattern, the enum shorthand means you get a really nice 'dsl' for writing asm for test etc as well. drat that's a lot of instructions though, writing a CHIP-8 emulator was about my limit.

have you tried breaking your decode Word into bytes/nibbles so you can do your decode as a switch?
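For CHIP-8 that decode-as-a-switch pattern comes out looking roughly like this (the opcodes below are from memory, so treat it as the shape rather than a spec):
code:
// Roughly how the nibble-switch decode looks for a CHIP-8-ish 16-bit word;
// the cases and encodings are from memory, so this is a shape, not a spec.
enum Instruction {
    case clearScreen
    case jump(address: UInt16)
    case setRegister(x: Int, value: UInt8)
    case addToRegister(x: Int, value: UInt8)
    case unknown(UInt16)
}

func decode(_ word: UInt16) -> Instruction {
    let nibbles = (
        (word & 0xF000) >> 12,
        (word & 0x0F00) >> 8,
        (word & 0x00F0) >> 4,
        word & 0x000F
    )
    switch nibbles {
    case (0x0, 0x0, 0xE, 0x0): return .clearScreen
    case (0x1, _, _, _):       return .jump(address: word & 0x0FFF)
    case (0x6, let x, _, _):   return .setRegister(x: Int(x), value: UInt8(word & 0x00FF))
    case (0x7, let x, _, _):   return .addToRegister(x: Int(x), value: UInt8(word & 0x00FF))
    default:                   return .unknown(word)
    }
}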

quote:

using this pattern, with optimization essentially set to -1 (via testability) in a unit test, I get just over 1 MIPS on my top-tier M1 Max
noice

toiletbrush
May 17, 2010
I've been working through the Cryptopals challenges, gotten as far as Set 1 Challenge 7, AES in ECB mode.

It's taken a few hours and a bunch of sources - wikipedia, which is correct but incomprehensible for the most part, a guide which is simple but misses out the detail, and a pdf that gives the state matrix after each step but is misleading and also has a couple of typos in the expected state - but I've now got a working AES implementation, for encryption at least. The code is dead simple and most of the concepts and transforms are too, but truly understanding some parts of the math (the mix-columns step, mainly) is a bit more complicated and I certainly don't get it yet.

Also I can't find a step-by-step state example for decryption, so I'm just going to have to figure it out myself. I'm assuming it's just 'do the inverse of each operation in reverse order', and most operations are easy to invert, but I've got no idea how to do the inverse of 'mix columns'.

toiletbrush
May 17, 2010
^^^ yeah thats dead useful, I just didn't quite get the maths of the Galois stuff and didn't want to just copy paste a bunch of stuff. all makes sense now though, I think, and decryption works!

if you do the cryptopals challenges it's well worth doing AES yourself if you can put an evening or two into it. Only reason it took me so long was that I wasted literally an hour trying to figure out why my expanded keys didn't match the reference output (the reference never mentioned that 'add' means 'xor' and the reference output had a typo!)
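For anyone else stuck on the same bit: InvMixColumns turned out to be the same column matrix multiply as the forward step, just with the constants 0e/0b/0d/09 instead of 02/03/01/01, all in GF(2^8) where 'add' means xor. Something like this (simplified from what I ended up with, so double-check it against a reference):
code:
// GF(2^8) multiply over the AES polynomial (0x11b) plus one column of
// InvMixColumns. Simplified; verify against a known-good test vector.
func gmul(_ a: UInt8, _ b: UInt8) -> UInt8 {
    var a = a, b = b, result: UInt8 = 0
    for _ in 0..<8 {
        if b & 1 != 0 { result ^= a }
        let highBit = a & 0x80
        a <<= 1
        if highBit != 0 { a ^= 0x1b }   // reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    }
    return result
}

// One column of InvMixColumns; "add" is xor, like everywhere else in AES.
func invMixColumn(_ c: [UInt8]) -> [UInt8] {
    [
        gmul(c[0], 0x0e) ^ gmul(c[1], 0x0b) ^ gmul(c[2], 0x0d) ^ gmul(c[3], 0x09),
        gmul(c[0], 0x09) ^ gmul(c[1], 0x0e) ^ gmul(c[2], 0x0b) ^ gmul(c[3], 0x0d),
        gmul(c[0], 0x0d) ^ gmul(c[1], 0x09) ^ gmul(c[2], 0x0e) ^ gmul(c[3], 0x0b),
        gmul(c[0], 0x0b) ^ gmul(c[1], 0x0d) ^ gmul(c[2], 0x09) ^ gmul(c[3], 0x0e),
    ]
}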

toiletbrush fucked around with this message at 18:17 on Jul 31, 2022

toiletbrush
May 17, 2010
decided to write a 2d path tracer so I can simulate lenses and diffraction and stuff


doesn't look like much but took hours because I had a dumb mostly invisible typo in my dot-product :argh:

only simulates perfect mirror reflections right now but does area lights. ill work on diffuse materials next.

toiletbrush
May 17, 2010

echinopsis posted:



no reason to think you won’t be replicating something like this only better soon.

the 2d aspect intrigues me
hell yes, this is exactly the sort of thing I'm aiming for

it's 2d in the sense that nothing has a z component, it's all just triangles and rectangles (and curves eventually) on the 2d plane. given everything that isn't a boundary of a shape is empty space, everything should be black except those boundaries, but obvs that would be really boring, so instead each light ray traces out its path in the space in-between the volumes. It's not technically accurate but it means you can visualise the light paths, and hopefully make pretty pictures, which is the main point!

The good thing is I can 100% rely on forward ray-tracing, it's dead easy to multithread, and also having limits on the number of bounces won't affect the image so much. The downside is I have to render every ray path, so if anyone knows any super fast anti-aliased/subpixel line drawing routines pls let me know.
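The core loop is basically: pick a light, fire a ray, bounce it around, and splat every segment into the accumulation buffer. Roughly this, for the mirror-only version (a simplified sketch; nearestHit and drawLine are stand-ins for the real scene/rasteriser code):
code:
import simd

// Very simplified mirror-only version of the forward trace: fire a ray from a
// light, bounce it up to maxBounces times, and splat each segment into the
// accumulation buffer. `nearestHit` and `drawLine` are stand-ins.
struct Hit { var point: SIMD2<Double>; var normal: SIMD2<Double> }

func traceLightRay(
    from origin: SIMD2<Double>,
    direction: SIMD2<Double>,
    maxBounces: Int,
    nearestHit: (SIMD2<Double>, SIMD2<Double>) -> Hit?,
    drawLine: (SIMD2<Double>, SIMD2<Double>, Double) -> Void
) {
    var origin = origin
    var direction = simd_normalize(direction)
    var energy = 1.0
    for _ in 0..<maxBounces {
        guard let hit = nearestHit(origin, direction) else { break }  // left the scene
        drawLine(origin, hit.point, energy)                           // splat the segment
        // Perfect mirror reflection: d' = d - 2(d.n)n
        direction = direction - 2 * simd_dot(direction, hit.normal) * hit.normal
        origin = hit.point + direction * 1e-6      // nudge off the surface
        energy *= 0.9                              // crude absorption
    }
}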

toiletbrush fucked around with this message at 21:00 on Dec 14, 2022

toiletbrush
May 17, 2010

echinopsis posted:

isn’t this fundamental to how path tracing works though? I usually use at least 1000 samples per pixel in my images.

does your engine work out a single ray (between bounces for example) and then draw that ray as a line to the image bigger?


i’m just thinking out loud sorry. no idea how it’s really working internally
in 3d you're doing a bunch of samples for each pixel, and for each sample you send out a ray and follow x number of bounces, possibly recursively, but after all that you're just updating 1 pixel in the accumulation buffer.

my 2d one is forward, so I send out a light ray from a randomly chosen light and then track it for x bounces (or until there's no light left to bounce), but for each bounce I have to update every pixel in the accumulation buffer that the light travels through.

3d path tracing spends 99.999% of cpu time tracing light and 0.00001% updating buffers. mine's more like 1% tracing light and 99% updating the accumulation buffer, which is why I need a fast as heck line drawing algorithm.
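What I think I actually need is something like Xiaolin Wu's anti-aliased line algorithm, which splits each step's coverage between the two pixels the line straddles. A stripped-down version that accumulates rather than overwrites (no gamma handling, endpoints fudged, probably off by a pixel somewhere):
code:
// Stripped-down Wu-style anti-aliased line, accumulating coverage into a
// buffer rather than overwriting pixels. A sketch, not the final routine.
func addLine(x0: Double, y0: Double, x1: Double, y1: Double,
             energy: Double, width: Int,
             into buffer: inout [Double]) {
    func plot(_ x: Int, _ y: Int, _ coverage: Double) {
        guard x >= 0, x < width, y >= 0 else { return }
        let index = y * width + x
        guard index < buffer.count else { return }
        buffer[index] += energy * coverage
    }
    var (x0, y0, x1, y1) = (x0, y0, x1, y1)
    let steep = abs(y1 - y0) > abs(x1 - x0)
    if steep { swap(&x0, &y0); swap(&x1, &y1) }
    if x0 > x1 { swap(&x0, &x1); swap(&y0, &y1) }
    let dx = x1 - x0
    let gradient = dx == 0 ? 1 : (y1 - y0) / dx
    var y = y0 + gradient * (x0.rounded() - x0)
    for x in Int(x0.rounded())...Int(x1.rounded()) {
        let fy = y - y.rounded(.down)             // fractional part
        let iy = Int(y.rounded(.down))
        if steep {
            plot(iy,     x, 1 - fy)               // split coverage between the
            plot(iy + 1, x, fy)                   // two pixels the line straddles
        } else {
            plot(x, iy,     1 - fy)
            plot(x, iy + 1, fy)
        }
        y += gradient
    }
}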

Here's a render with 100 rays (vs 1,000,000 in the pic above) which makes how it works more obvious...

(both lights are area lights so the rays start in random places inside a circle)

toiletbrush fucked around with this message at 22:21 on Dec 14, 2022

toiletbrush
May 17, 2010
spent a couple of hours getting refraction and basic diffuse transmission/reflection working, refraction isn't based on wavelength though.


toiletbrush fucked around with this message at 17:32 on Dec 16, 2022

toiletbrush
May 17, 2010

echinopsis posted:

what have you coded this in
The best language, Swift. I’ll probably put it on GitHub at some point

toiletbrush
May 17, 2010
holy crap that's an amazing haul, good job

Got a hacked version of refraction working...

it's really obvious I need to actually read up about how light works cos the way I'm doing it isn't right at all. Right now I'm basically assuming hue = wavelength and using it as a modifier for the IOR, but hue and wavelength don't really map to each other. This is why both ends of the rainbows are bright purple rather than dying out to black at the red and blue ends like they should (I think?)

So instead I'm going to have to model lights as actually outputting a spectrum of colour, which is actually really easy, the tricky bit is going to be accumulating light of different wavelengths in a way that sort of matches perception. I'll leave that for later. The upside is it's easy to model surfaces the same way, have wavelength-specific scattering and absorption and accurately model lights of different types.
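i.e. instead of the hue hack, each ray gets an actual wavelength and the IOR comes from that, e.g. via Cauchy's equation. A sketch (the coefficients are just ballpark glass-ish numbers, not anything measured):
code:
// Sketch of wavelength-based dispersion: each ray carries a wavelength in nm
// and the IOR comes from Cauchy's equation n(λ) = A + B/λ². The coefficients
// are ballpark "glass-ish" numbers, not measured values.
struct SpectralRay {
    var wavelength: Double      // nanometres, roughly 380...740 for visible light
    var energy: Double
}

func refractiveIndex(wavelength: Double, a: Double = 1.50, b: Double = 5_000.0) -> Double {
    a + b / (wavelength * wavelength)    // shorter wavelengths (blue) bend more
}

// Sample a wavelength uniformly across the visible range when a light emits a ray.
func sampleWavelength() -> Double {
    Double.random(in: 380...740)
}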

also seeing some artefacts, like the bands on the right edge of the prism, not really sure what that's down to

toiletbrush
May 17, 2010
last post until it does something more interesting, but it's modelling light more accurately now, no more RGB hacks, just wavelengths :cool:

It's all a bit exaggerated for the pic above, but it can be realistic if you need it to be. Also figured out what all those glitches are about, not hard to fix but will wreck performance a bit, although multithreading will more than cover the difference. Hopefully it'll be fast enough to generate decent images at interactive speeds, cos playing around with this stuff in a gui would be nuts.

For reference, this takes about an 80th of a second on a single core

toiletbrush
May 17, 2010

Internet Janitor posted:

the premise of Crime Committer reminds me most of BEERHUNT, a ti83+ game that made the rounds from time to time. in some ways it's like a more chaotic dopewars: you buy and sell booze and booze accessories and periodically throw keggers. buy enough supplies and set your prices right and you can make a killing, but when poo poo goes off the rails you'll have to cheese it before the cops catch you and confiscate everything

if you expand the scope you'll gradually start to have overlap with liberal crime squad, too

edit: i made a dopewars-like game in early 2020 called business is contagious which has aged in interesting ways since its release: https://internet-janitor.itch.io/business-is-contagious
I started writing a trading game around the same time as echi and Crime Committer, based on Minder for the speccy, after seeing One Credit Classics playing it a bunch. You basically travel round pubs buying potentially stolen/damaged gear and trying to sell it on for profit, avoiding/bribing cops and paying off debts etc.

I wanted everything to be procedural, so each game has a randomly generated 'town', a basic economy going on, and I also wrote an absolute poo poo ton of production rules for stuff like pub names, cockney nicknames, stuff to sell and buy and general conversation. I'm dead pleased with all of it, especially the language which can be really funny.

Trouble is, this last weekend I sort of realised that the game just isn't...fun. I'm trying to figure out what I've got wrong compared to other games in the genre, but not figured it out yet.

toiletbrush
May 17, 2010
That's part of the problem...every other game I've made, I've played it the whole time I've been building it and enjoyed it, but this one has taken so long to get to a point where the gameplay loop is complete that I've not really played it at all...and now it's all working, it's no fun :(

toiletbrush
May 17, 2010
a few months ago I posted that I gave up with writing a Minder clone because it wasn't actually any fun to play. So since then I've been working on a Laser Squad clone instead, cos I love games like that. The game loop is in, there's a decent selection of weapons/enemies/perks/utilities etc, AI is pretty solid etc, it's mostly just tarting it up now.

This weekend I added a 'tactical map' scene - you conquer and reveal squares by doing missions, which get harder as you go south, with the goal of getting to the bottom to kill the final boss, or something like that. The reason I'm posting is I added proc-gen maps to the background, and I really like how it looks, even tho it's possibly a bit busy...


It has no effect on the game whatsoever - it's just there for 'flavour', but I still love it. Also I was able to use the text generator from the Minder clone to proc-gen mission descriptions and names etc, so it wasn't a total waste!


toiletbrush
May 17, 2010
thanks for positive feedback :hfive:

Zamujasa posted:

my only suggestion would be some kind of border between the top three options to make it look a bit less busy and a bit more cleanly delineated but otherwise it looks p good
yeh, a lot of the between-missions ui needs work, but its the least interesting bit to dev and its probably going to take the longest and be the hardest. I'm leaving it to last.

For now I'm trying to figure out a way for the mission outcomes to feed back into the map, or have the map affect missions, e.g if the square is in the mountains then you get a mountain tile set, or a swamp tile set if it's swampy, or affect the mission description.
