PIZZA.BAT
Nov 12, 2016


:cheers:


finally said 'gently caress it' and impulse bought an arduino starter kit. i've been wanting to get into making reactive festival stuff forever and since i'm being sent to the frigid north for work for the winter may as well do something constructive


PIZZA.BAT
Nov 12, 2016


:cheers:


Sagebrush posted:

Clothes covered in LEDs to wear at burning man

yup!

PIZZA.BAT
Nov 12, 2016


:cheers:


go play outside Skyler posted:

https://charar8.com

basically enter any string and like it or dislike it.

our goal is global world domination by 2021

built in 1.5h with react

someone beat me to yospos, i see

PIZZA.BAT
Nov 12, 2016


:cheers:


https://twitter.com/dauragon/status/1216397753481334784?s=21

funny this tweet should show up right after i’ve turned 32 and am looking into building my own pinball table

PIZZA.BAT
Nov 12, 2016


:cheers:


Jonny 290 posted:

I bought a defunct power supply company's entire parts stock today for $0.01 to $0.02 on the dollar - not sure yet until I inventory.

literally a couple million components.

$200

jonny i finally got around to setting up a work bench in my basement so i can fiddle with electricity. now i need to start hoarding components and this post is very inspirational ty

PIZZA.BAT
Nov 12, 2016


:cheers:


Sagebrush posted:

it was incredibly hard to find this tweet. thanks google for helpfully searching for "related terms" and returning 10,000 pages of asimov stuff even when i explicitly tell you i do not want them. ffs

https://twitter.com/Edcrab_/status/714959984229031936

this tweet cracks me up any time i think about it so thank you for posting it

PIZZA.BAT
Nov 12, 2016


:cheers:


https://twitter.com/nvuono/status/1219828601853444097?s=21

PIZZA.BAT
Nov 12, 2016


:cheers:


i don't think it'd be too difficult to just make it yourself using a couple videos on youtube as a reference, would it?

PIZZA.BAT
Nov 12, 2016


:cheers:


lol nice

PIZZA.BAT
Nov 12, 2016


:cheers:


Pile Of Garbage posted:

https://twitter.com/GarbageDotNet/status/1250850882922508288

thinking of modifying it so that if the user accepts the costanza disclaimer they get redirected to goatse

art

PIZZA.BAT
Nov 12, 2016


:cheers:


orly posted:

did another DOOM inspired composition, arranged the fft height field in polar coords

https://www.youtube.com/watch?v=mqmW32KV97s

neat

PIZZA.BAT
Nov 12, 2016


:cheers:



i know what a few of these words mean

PIZZA.BAT
Nov 12, 2016


:cheers:


nothing makes you appreciate modern toolsets more than forcing yourself back a decade or two

PIZZA.BAT
Nov 12, 2016


:cheers:


MononcQc posted:

I started running a small script that scrapes the globe and mail's daily horoscopes and then uses idiot regexes to turn them into tech horoscopes. It works surprisingly well: https://ferd.ca/horoscope.html

sharing this in the company slack, thank
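for anyone curious, the "idiot regexes" trick is probably just a pile of substitutions along these lines — my guesses, not ferd's actual script or word list:

```python
import re

# hypothetical horoscope-to-tech swaps; the real script's substitutions are unknown
SWAPS = [
    (r"\blove\b", "uptime"),
    (r"\bmoney\b", "cloud credits"),
    (r"\bfriends?\b", "coworkers"),
    (r"\bthe stars\b", "the logs"),
]

def techify(horoscope: str) -> str:
    """run each regex substitution over the horoscope text in order"""
    for pattern, replacement in SWAPS:
        horoscope = re.sub(pattern, replacement, horoscope, flags=re.IGNORECASE)
    return horoscope
```

the charm of the approach is that horoscope prose is so vague that almost any noun swap still parses as advice.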

PIZZA.BAT
Nov 12, 2016


:cheers:


Doc Block posted:

LMFAO that’s $398.88


jesus loving christ

PIZZA.BAT
Nov 12, 2016


:cheers:


Sagebrush posted:

one of the only funny things lowtax ever did was ban some guy for browsing the forums from a webtv

lol yeah

PIZZA.BAT
Nov 12, 2016


:cheers:


i guess i'll ask here as well. i want to dive into the world of ai, starting with a primitive genetic algorithm which i'll teach to play different board/card games. my issue is i'm looking everywhere for libraries to help me generate the starting decision trees and almost all of them require you to pass in the fitness function and let the library iterate generations. that won't work for me because #1 solving the game before i even start takes all the fun out of this and more importantly #2: gently caress that i'm way too lazy to do that

also many of these games are multi-player so i want to be able to simulate a couple thousand games running in parallel with thousands of candidates all playing against each other a couple hundred times each generation. that's also something my very amateur research hasn't turned up anything on

anyone want to point me in the right direction?
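for reference, the usual workaround when a library insists on owning the fitness function is to skip the library and hand-roll the generation loop yourself, so fitness can come from tournament play instead of a closed-form function. a minimal sketch with made-up names and a placeholder fitness (genome sum standing in for win rates from simulated games):

```python
import random

POP_SIZE = 8
GENOME_LEN = 16  # placeholder: real genomes would encode decision-tree weights

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def play_tournament(population):
    """placeholder for the 'thousands of parallel games' step: here fitness
    is just the genome sum, standing in for win rates from real matches"""
    return [sum(genome) for genome in population]

def next_generation(population, scores, mutation_rate=0.1):
    """keep the top half, refill by mutating the survivors"""
    ranked = [g for _, g in sorted(zip(scores, population),
                                   key=lambda pair: pair[0], reverse=True)]
    survivors = ranked[: len(ranked) // 2]
    children = [[w + random.uniform(-mutation_rate, mutation_rate) for w in parent]
                for parent in survivors]
    return survivors + children

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(5):
    scores = play_tournament(population)
    population = next_generation(population, scores)
```

once the loop is yours, swapping `play_tournament` for an actual round-robin of simulated games is just a matter of plumbing.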

PIZZA.BAT
Nov 12, 2016


:cheers:


yeah the problem with monte carlo is your ai will never deceive other players or learn to call out players trying to deceive it. i’m interested in playing around with games with information asymmetry like poker or settlers of catan

PIZZA.BAT
Nov 12, 2016


:cheers:


also because the games i want to play with have multiple factors involved with them that can be expressed by separate chromosomes that all contribute to the overall game strategy. ie: choosing your starting positions in catan, trading assets, and deciding what to build are all totally separate elements but need to be included in the whole package
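the multi-chromosome idea could be structured as simply as this — a sketch with hypothetical catan-flavored names, not anyone's actual implementation:

```python
from dataclasses import dataclass, field
import random

@dataclass
class CatanGenome:
    """one candidate strategy: each chromosome covers one phase of play"""
    placement: list = field(default_factory=lambda: [random.random() for _ in range(6)])
    trading: list = field(default_factory=lambda: [random.random() for _ in range(4)])
    building: list = field(default_factory=lambda: [random.random() for _ in range(5)])

    def crossover(self, other):
        """swap whole chromosomes rather than splicing within one, so a good
        trading strategy survives intact even if placement gets replaced"""
        pick = lambda a, b: list(a) if random.random() < 0.5 else list(b)
        return CatanGenome(
            placement=pick(self.placement, other.placement),
            trading=pick(self.trading, other.trading),
            building=pick(self.building, other.building),
        )
```

keeping the chromosomes separate also means you can mutate them at different rates if, say, opening placement converges faster than trade strategy.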

PIZZA.BAT
Nov 12, 2016


:cheers:


idk i could also be a crazy person

PIZZA.BAT
Nov 12, 2016


:cheers:


trying to remember the last time i ran a channel scan jfc

PIZZA.BAT
Nov 12, 2016


:cheers:


spankmeister posted:

that won't work because the curtains will be too heavy and it'll pivot down. you've made a long lever.

he could pretty easily use a material as light as a sheet and still get his desired result. i doubt that would be enough to bend pvc pipes that wide

PIZZA.BAT
Nov 12, 2016


:cheers:


Trig Discipline posted:

i just passed a major career milestone and made a thing to commemorate

https://www.youtube.com/watch?v=dMeXcut4laY

:toot:

PIZZA.BAT
Nov 12, 2016


:cheers:


well i finally cancelled my code42 subscription. rest in piss. that program worked real well for a long time but man it really feels like they've been actively trying to get rid of all their consumer customers over the past two years

i have two weeks to get duplicati w/ backblaze set up

PIZZA.BAT
Nov 12, 2016


:cheers:


duplicati / backblaze was able to run a full backup in about 18 hours where crashplan was able to move from 72% to 75% backed up in the same timespan. it's definitely more finicky and not ready for widespread consumer use but anyone in yospos should be able to handle it easily. i recommend it

PIZZA.BAT
Nov 12, 2016


:cheers:


so i've got the java part of my idiot spare time project up and working and am now looking at the python/ai side of things and man.... this poo poo is complicated lol

i'm reading through tensorflow documentation and while nothing is completely beyond my grasp it's just every single thing i read opens up like a hydra of even more things i need to read. i'm eventually going to hit a point where i go, 'oh... ok now i get it i know what to do now' but it's probably going to be a while

edit: the most frustrating part is even in their tutorials they do the, 'ok now draw the rest of the owl' a whole lot. i understand that this is a brand new and complicated field where the only real contributors are academics at this point but it'd be nice if their tutorial code used real loving variable names instead of things like _

PIZZA.BAT fucked around with this message at 15:10 on Nov 9, 2021

PIZZA.BAT
Nov 12, 2016


:cheers:


pseudopresence posted:

Is _ not just the 'don't care' variable for return values you don't need? Are they using it in other contexts?

they were using it as an iterator that they pulled stuff out of and manipulated all the time

like.... please give me a meaningful name.... please

this is tutorial code for gently caress's sake!
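for anyone following along, the two uses being conflated here:

```python
# idiomatic: _ marks a value you intentionally ignore
quotient, _ = divmod(17, 5)      # only the quotient matters here

# the tutorial-code antipattern: _ as a load-bearing loop variable
# that gets read and manipulated everywhere
total = 0
for _ in [1, 2, 3]:
    total += _ * 2               # now _ carries real meaning

# same loop with a name that says what it is
total_named = 0
for sample in [1, 2, 3]:
    total_named += sample * 2
```

both run identically; the second just makes the reader reverse-engineer what `_` holds.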

PIZZA.BAT
Nov 12, 2016


:cheers:


animist posted:

strongly rec just cloning a project that does something close to what u want and then tweaking it, if possible. bringing up ML projects is hell, i always forget some tiny super important detail. then the model doesn't converge and it's undebuggable cause there's no way to debug why your gradients are wrong. if you use an existing codebase usually a lot of those critical learning rate tweaks or whatever are already baked in

the problem is no one has done* what i'm attempting so the only real way to go about it is learn what all these different components actually do and why, unfortunately. i'm attempting to teach a bot how to play poker. i spent a few months looking through the literature on other academic shops who all took swings at it and was always unhappy with their implementations because they'd miss fundamental aspects of high-level play. the next few months i built the simulator which was a bit of a pain in the rear end because, surprise, it turns out poker has a shitload of edge cases in it! after that i built a class that runs all the evaluation on the table/hand state which will create the massively simplified information i feed into the neural net so it doesn't have to spend who knows how many millions of generations to learn the difference between a low and high straight

the big problem is this is a multi-actor game with not only hidden information but tons of scenarios where making the right play can punish you, wrong moves can reward you, and scenarios where a good situation can quickly turn bad as you see additional community cards. i'm pretty sure what i want is an actor-critic model, probably with a SAC agent because maximizing entropy is definitely something you want in a game like poker. the problem is everything i've read so far is making it seem like you have to explicitly define the expected reward function so the critic knows how to criticize but.... how am i supposed to do that if the whole point of this exercise is to let these models discover optimal choices on their own? every tutorial i've read just seems to gloss over this and it's making me think i'm in the totally wrong place

*there may be a handful of people who've pulled this off but i'm gonna guess they're raking in money in online casinos and keeping their mouths shut

PIZZA.BAT
Nov 12, 2016


:cheers:


yup. that’s one of the things that inspired me to make this in the first place. the problem i had with their implementation is it relies heavily on monte carlo which for a game like poker is insane. with a heavily simplified decision space they were able to run it on *only* 1.5 petabytes of disk

my main objective is to create a model that largely acts on information professional level players act on and also evaluates hands similarly. humans don’t see the flop and go ‘ok here’s all the possible combinations of turn and river cards we can see along with all the actions the other players can take for each of those combinations so therefore my bet is X’

they go, ‘ok i have a flush draw, top pair with a strong kicker, and there’s no risk of a straight. my opponent has a loose opening range and also tends to chase. my bet is X’
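that human-readable evaluation maps pretty directly onto a small feature vector. a sketch with hypothetical names, not the actual evaluator class — cards here are (rank, suit) tuples with rank 2..14, and this only covers a few illustrative features:

```python
def hand_features(hole, board):
    """reduce a poker hand state to the kind of coarse features a human
    reasons with, instead of raw card combinatorics"""
    cards = hole + board
    suits = [s for _, s in cards]

    # four to a suit = flush draw
    flush_draw = any(suits.count(s) == 4 for s in set(suits))

    # does a hole card pair the highest board card?
    board_ranks = [r for r, _ in board]
    top_pair = bool(board_ranks) and max(board_ranks) in [r for r, _ in hole]

    # queen-or-better kicker as a crude strength proxy
    kicker = max(r for r, _ in hole)

    return {
        "flush_draw": flush_draw,
        "top_pair": top_pair,
        "strong_kicker": kicker >= 12,
    }
```

feeding the net a handful of booleans like these, instead of raw cards, is exactly the "skip millions of generations of learning what a straight is" shortcut.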

PIZZA.BAT
Nov 12, 2016


:cheers:



adult swim bumper generator is an idea i've been kicking around in my head for like a decade. that's awesome. using wint tweets as source material is genius

PIZZA.BAT
Nov 12, 2016


:cheers:


Jonny 290 posted:

i also have a notion to figure out how to use http://www.taiganet.com/ to generate a minute of weather channel report and inject it in between each show

is there any reason you can't just run it in its own window and grab its image whenever you want? seems like that'd be the easiest option

PIZZA.BAT
Nov 12, 2016


:cheers:


PIZZA.BAT posted:

so i've got the java part of my idiot spare time project up and working and am now looking at the python/ai side of things and man.... this poo poo is complicated lol

i'm reading through tensorflow documentation and while nothing is completely beyond my grasp it's just every single thing i read opens up like a hydra of even more things i need to read. i'm eventually going to hit a point where i go, 'oh... ok now i get it i know what to do now' but it's probably going to be a while

edit: the most frustrating part is even in their tutorials they do the, 'ok now draw the rest of the owl' a whole lot. i understand that this is a brand new and complicated field where the only real contributors are academics at this point but it'd be nice if their tutorial code used real loving variable names instead of things like _

update: i finally have a minimal tensor set up that appears to be working. i've now also parallelized both the python and java sides of the equation so i'll be able to fully utilize the hardware i have while training instead of everything being choked onto a single core

i'm now working on a naive mutation algo just to sanity check that this is going to behave in the way i expect. if i'm able to see progress here i'll start putting in work on both making the fully fleshed out tensor along with the actual more complicated mutation algo and let 'er rip

one question for anyone who knows this stuff: i understand that tensor stuff is supposed to use the gpu a lot but what's actually *happening* on the gpu? in 99% of use cases it seems like you're supposed to be running it through a built-in training session which is optimized to run on the gpu but i'm not using any of that because the training/reward mechanism is more out there. if i'm only instantiating a tensor and pushing data into it / taking the results tons of times is that something that should be automatically being pushed over to the gpu? right now when i'm running my tests i'm seeing my cpu hit 100% and the gpu is remaining idle and i don't know if this is something i should be working on fixing
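for anyone playing along, a minimal way to check what tf actually sees and where ops land — standard tf2 calls, nothing exotic:

```python
import tensorflow as tf

# what devices does this build of tf actually see?
# an empty list usually means a cpu-only wheel or a driver/cuda mismatch
gpus = tf.config.list_physical_devices("GPU")
print("visible GPUs:", gpus)

# log which device every op gets placed on
tf.debugging.set_log_device_placement(True)

# a plain matmul outside any training loop still lands on the gpu if one is visible
a = tf.random.uniform((256, 256))
b = tf.random.uniform((256, 256))
c = tf.matmul(a, b)
```

if the gpu list is empty while `nvidia-smi` sees the card, the usual culprit is an install without cuda support rather than anything in your code.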

PIZZA.BAT
Nov 12, 2016


:cheers:


PIZZA.BAT posted:

update: i finally have a minimal tensor set up that appears to be working. i've now also parallelized both the python and java sides of the equation so i'll be able to fully utilize the hardware i have while training instead of everything being choked onto a single core

i'm now working on a naive mutation algo just to sanity check that this is going to behave in the way i expect. if i'm able to see progress here i'll start putting in work on both making the fully fleshed out tensor along with the actual more complicated mutation algo and let 'er rip

one question for anyone who knows this stuff: i understand that tensor stuff is supposed to use the gpu a lot but what's actually *happening* on the gpu? in 99% of use cases it seems like you're supposed to be running it through a built-in training session which is optimized to run on the gpu but i'm not using any of that because the training/reward mechanism is more out there. if i'm only instantiating a tensor and pushing data into it / taking the results tons of times is that something that should be automatically being pushed over to the gpu? right now when i'm running my tests i'm seeing my cpu hit 100% and the gpu is remaining idle and i don't know if this is something i should be working on fixing

another tensor question: does anyone know of a list somewhere that explains what each tensor actually *does* or i guess what kind of problems you should be using it for? all the documentation i find uses in-depth lingo or sometimes commits the cardinal sin of using the defining word in the definition itself and it's like... come on man explain this to me like i'm an idiot

ImmantizeTensor : This tensor immantizes

gee thanks

PIZZA.BAT
Nov 12, 2016


:cheers:


the more i've talked about this stuff with people who've gotten graduate/phds from very high tier schools the more i'm convinced that no one actually knows how the gently caress any of this works and is just guessing all the time but doesn't want to admit it

PIZZA.BAT
Nov 12, 2016


:cheers:


echinopsis posted:

https://www.tiktok.com/embed/7056361498527943938

lmao I recorded it. I made some music. I sped up the video to match the music. The video and the music are utterly unrelated. Hopefully you can tell it gets bright.

this feels like a final fantasy boss fight is about to start and i mean that in the best way possible

PIZZA.BAT
Nov 12, 2016


:cheers:


i guess i should have posted this here:

PIZZA.BAT posted:

paging sagebrush or anyone else who's into 3d printing:

my brother in law is getting rid of some stuff and wants to know if i'm interested in an ender 3. as someone who's never done any 3d printing and would be an absolute beginner is this something that would be worth grabbing for the hell of it or will it be a giant pita?

PIZZA.BAT
Nov 12, 2016


:cheers:


Sagebrush posted:

How interested are you in doing 3D printing as a hobby?

i'm gonna say 'kind of interested'. i'm working on other projects now but when i wrap them up i could see the availability of a printer steering my next foray into something more physical, which could be neat. in the immediate future it'd definitely be collecting dust, though

PIZZA.BAT
Nov 12, 2016


:cheers:


PIZZA.BAT posted:

update: i finally have a minimal tensor set up that appears to be working. i've now also parallelized both the python and java sides of the equation so i'll be able to fully utilize the hardware i have while training instead of everything being choked onto a single core

i'm now working on a naive mutation algo just to sanity check that this is going to behave in the way i expect. if i'm able to see progress here i'll start putting in work on both making the fully fleshed out tensor along with the actual more complicated mutation algo and let 'er rip

one question for anyone who knows this stuff: i understand that tensor stuff is supposed to use the gpu a lot but what's actually *happening* on the gpu? in 99% of use cases it seems like you're supposed to be running it through a built-in training session which is optimized to run on the gpu but i'm not using any of that because the training/reward mechanism is more out there. if i'm only instantiating a tensor and pushing data into it / taking the results tons of times is that something that should be automatically being pushed over to the gpu? right now when i'm running my tests i'm seeing my cpu hit 100% and the gpu is remaining idle and i don't know if this is something i should be working on fixing

after spending the past month fine tuning the parallelization between the java/python bits, which is necessary because there's a LOT of data moving between the two, i noticed that the best i could push the machine to was ~30% cpu utilization due to the overhead involved in sending messages and all the i/o wait. i was starting to look into ways to speed up this process when a lightbulb suddenly turned on: you only use the python to *train* the model so you can get something you can export in some format. wait a second. these models are probably more broadly adopted by other languages because they have to be used in high-performance production environments. i'm not actually doing any training in the python so i could.... i could probably... god damnit

so a few quick google searches later i've found that the generally accepted industry standard is onnx and yes, there's a java library. god damnit. so the past two months of building these message brokers between the two systems, while not being completely worthless, were mostly useless. the only thing i need the python side to do is take the weights and generate an onnx model for the java machine to use. once the java machine has the model all those messages can happen internally which will result in a HUGE performance boost

fjeriuwoanvjkflfdnvbjiHGFRUIOPGNFREWUOIka;ruoi;

poo poo like this is why i'm convinced no one in the ai space knows what the gently caress they're doing. i've walked several ai people through my architecture and not one of them even hinted that i should move in this direction, but now that i'm here it's extremely obvious that this was the best solution all along. loving :bang:

PIZZA.BAT
Nov 12, 2016


:cheers:


GWBBQ posted:

rather than a field of research, think of ai as a text or spoken input, with the desired output being money from venture capitalists

100% agreed


PIZZA.BAT
Nov 12, 2016


:cheers:


i got everything cut over to being run entirely in the java engine and just finished running some benchmarks. over 5x increase in performance and my cpu was thoroughly pegged at 100% the entire time. gently caress yes. i have an old-rear end 6th gen intel that i'm about to upgrade to a 12th gen by the end of this week too so we'll see how much further it goes with the new hardware. AND THEN i'll start running them on the gpu to see if it goes even faster from there

:science:
