Dick Trauma
Nov 30, 2007

God damn it, you've got to be kind.
I'm old enough that I honestly wouldn't know that those names are all fake.

JACKED LIKE A MAN


mds2
Apr 8, 2004


Australia: 131114
Canada: 18662773553
Germany: 08001810771
India: 8888817666
Japan: 810352869090
Russia: 0078202577577
UK: 08457909090
US: 1-800-273-8255
Cumpo is playing downtown this weekend.

Queen_Combat
Jan 15, 2011
I don't get it.

Hempuli
Nov 16, 2011



Metal Geir Skogul posted:

I don't get it.

The image is from this tweet:

https://twitter.com/botnikstudios/status/955870327652970496

i.e. more predictive keyboard shenanigans! :)

E: Or just neural network shenanigans? I thought Botnik Studios only did predictive text generation, huh.

Tunicate
May 15, 2012

https://www.youtube.com/watch?v=r6zZPn-6dPY

A demonstration that a single neural network trained as a GAN can generate a variety of images. All images at the same panel coordinate are generated from the same latent variable, so its meaning is preserved across different classes.
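The "same latent variable, different class" setup can be sketched with a stand-in generator. Everything here (the random weights, the toy class embeddings, the `generator` function) is a placeholder for a trained conditional GAN, just to show how one `z` gets decoded under several class conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, class_embedding):
    """Stand-in for a trained conditional generator: random, untrained weights."""
    W = np.outer(class_embedding, z)  # the class conditions how z is decoded
    return np.tanh(W)                 # a fake 8x8 "image"

z = rng.normal(size=8)  # one latent variable...
class_embeddings = {name: rng.normal(size=8) for name in ("cat", "dog", "bird")}
# ...decoded under three class conditions: same z, different class, so whatever
# z encodes (pose, layout) is shared across the three outputs.
images = {name: generator(z, emb) for name, emb in class_embeddings.items()}
```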

Your Computer
Oct 3, 2008




Grimey Drawer

Tunicate posted:

https://www.youtube.com/watch?v=r6zZPn-6dPY

A demonstration that a single neural network trained as a GAN can generate a variety of images. All images at the same panel coordinate are generated from the same latent variable, so its meaning is preserved across different classes.

With the low resolution and how fast the images morph I didn't notice it at first, but I paused and uh.. the images are actually pretty disturbing :stare: (well at least the animal ones)

End of Shoelace
Apr 5, 2016
i would love that aesthetic for a horror game

7c Nickel
Apr 27, 2008
I've always thought this particular creature had achieved the best looking completely alien system of locomotion.

https://www.youtube.com/watch?v=AUXc6mckGLE

Your Computer
Oct 3, 2008




Grimey Drawer

End of Shoelace posted:

i would love that aesthetic for a horror game

I mean, neural networks have nailed the uncanny valley effect. It's so close, and yet so wrong.

Phlegmish
Jul 2, 2011




I could totally believe most of these

Mindless
Dec 7, 2001

WANTED: INFO on Mindless. Anything! Everything! Send to
Pillbug
Baddwurds is a genius self-contained joke; a perfect band name.

Beachfeel is my favorite surf-themed chillwave artist

I Love You, the Wait must be a Killing Joke cover band

Mindless has a new favorite as of 15:41 on Jan 26, 2018

Phy
Jun 27, 2008



Fun Shoe
Jonathan Mushboy. Scenemy. Goof Alibi. MANACE.

Kiebland
Feb 22, 2012
I would totally see Benus Jackson.

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



Boy/Boys play Pet Shop Boys in the style of Boy George and vice versa.

Graviija
Apr 26, 2008

Implied, Lisa...or implode?
College Slice
Dave Dump McMan is an old favorite.

Does it just show how immature I am that this procedurally generated text stuff (the Harry Potter chapter, those Seinfeld and X-Files scripts, the recipes) is the funniest thing in the world to me? I'm just laughing my rear end off, unendingly.

I guess this would also explain why I like so many of Clickhole's "random insanity" articles.

Graviija has a new favorite as of 22:06 on Jan 26, 2018

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



I think "random" humor is super hard to do well, but you can use "tricks" like having an NN or Markov generator spit weird poo poo at you, and it'll manage to clear the lame bar. Imagine that Coachella roster written by hand; it would be incredibly try-hard and have very few actually funny band names.

I called it constrained writing earlier in the thread, which I still think it is (they're very often heavily curated), but they do hit the funny bone more often than not.

& Yeah, Woebin: You're right that it's not 100% handmade.

Guy Mann
Mar 28, 2016

by Lowtax
Procedurally-generated "Um, actually..." posts.

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



There's some great ones where they combine two corpora, such as the erowid recruiter (drug trip reports + recruiter emails):

https://twitter.com/erowidrecruiter/status/560559080289222656

https://twitter.com/erowidrecruiter/status/947343560847831040

https://twitter.com/erowidrecruiter/status/841360717433434112

Probably curated as well, but they're excellent.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
IRC bots can and will be brutal.

https://twitter.com/Tobyslop/status/687002587971858432

Hempuli
Nov 16, 2011



Especially with Markov chains, curation is pretty much impossible not to do, because so much of the generated content is complete nonsense and the actually funny bits are little gems hidden in the noise. I'm actually not sure what the signal-to-noise ratio is with well-trained neural networks; I'd like to imagine they create good stuff every time, but the truth is probably less fantastic.

I don't personally mind cherry-picking entries, but at the same time I've got to admit that knowing an entry is heavily edited does diminish the funniness. Like Krankenstyle said, it's just really hard to do absurd humour well, and procedural generation kinda bypasses the "this is trying too hard" problem entirely.

And yeah, combining two source texts can result in amazing things, like those dinosaur plants posted earlier.

Some markov chained stuff from my twitter "bots":
https://twitter.com/chaingenerator/status/953201759090077696

https://twitter.com/MtGmarkov/status/945379488644427776
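For reference, the kind of word-level Markov generator bots like these run on fits in a few lines. This is a minimal sketch (the function names are my own, not from any particular bot), and it shows exactly why the output is mostly noise: each step only knows the last couple of words.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random starting state, sampling successors."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(state):]))
        if not successors:  # dead end: the corpus never continues this state
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Feed it two corpora at once (trip reports plus recruiter emails, say) and the chain stitches them together wherever their word sequences overlap.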

7c Nickel
Apr 27, 2008
https://twitter.com/deepdrumpf/status/728317897412579328

https://twitter.com/DeepDrumpf/status/788918639810478080

7c Nickel has a new favorite as of 08:15 on Jan 27, 2018

End of Shoelace
Apr 5, 2016

Your Computer posted:

With the low resolution and how fast the images morph I didn't notice it at first, but I paused and uh.. the images are actually pretty disturbing :stare: (well at least the animal ones)



more procedural generation, but this time it's fake celebrities!

https://www.youtube.com/watch?v=XOxxPcy5Gr4

here's a fun game: play this video at 2x speed and pause anywhere; see what kind of faces you get

https://www.youtube.com/watch?v=f8xSD4HO_8k&t=178s

here's a slower video if you want to watch the morphing in action

https://www.youtube.com/watch?v=36lE9tV9vm0&t=297s

Guy Mann
Mar 28, 2016

by Lowtax
The podcast The F-Plus, which is usually dedicated to reading weird poo poo they find on the internet like a modern version of the classic Awful Link of the Day, did their latest episode about reading scripts generated by Botnik. Harry Potter and the Portrait of What Looked Like a Large Pile of Ash is pretty good but I think the procedurally-generated West Wing episode is my favorite. "But that's not going to be something the American president of America will sign in America! This is boring to me. Donna! Donna. Donna. Donna? Donna?! Donna, help Donna help Donna, help Donna."

https://thefpl.us/episode/274

Your Computer
Oct 3, 2008




Grimey Drawer

End of Shoelace posted:

here's a fun game: play this video at 2x speed and pause anywhere; see what kind of faces you get

oh you know, just typical celeb photos

The Glumslinger
Sep 24, 2008

Coach Nagy, you want me to throw to WHAT side of the field?


Hair Elf

Your Computer posted:

oh you know, just typical celeb photos



Elton John?

DACK FAYDEN
Feb 25, 2013

Bear Witness
Steven Hawking on a bad hair day?

Kiebland
Feb 22, 2012
Phil Spector on a relatively GOOD hair day?

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop
John Oliver?

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop

This was an incredible idea; my GF and I have been reading this all day and lolling, and some of the recipes are delicious as well

RBA Starblade
Apr 28, 2008

Going Home.

Games Idiot Court Jester


State of the Union lookin good

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop
Oh, content: Mandelbulb videos (renders of a 3D analogue of the Mandelbrot set fractal)



The best ones involve both zooming through it with the camera at different scales, while simultaneously tuning the parameters that generate the whole fractal to disturb the surface's location (the "level set").

Like below:

https://www.youtube.com/watch?v=Yb5MRbgNKSk
"The Intricacies of Mechanoid Eyeballs HD"

https://www.youtube.com/watch?v=jYsbFreUMkg

Even though there's a human involved, the shapes are all procedurally generated since the person tweaking the parameters really has no idea what the result is going to look like until they try it.
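For the curious, the usual Mandelbulb iteration (the "triplex power" formula commonly attributed to Daniel White and Paul Nylander, typically with power 8) is a minimal escape-time test; `power` here is one of the parameters those videos animate to deform the level set:

```python
import math

def mandelbulb_escape(cx, cy, cz, power=8, max_iter=20, bailout=2.0):
    """Iterate the triplex map z -> z^power + c; return the escape iteration,
    or None if c never escapes (treated as inside the set)."""
    x = y = z = 0.0
    for i in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            return i  # escaped: point is outside the set
        # convert to spherical coordinates, raise to `power`, convert back, add c
        theta = math.acos(z / r) if r > 0 else 0.0
        phi = math.atan2(y, x)
        rp = r ** power
        x = rp * math.sin(theta * power) * math.cos(phi * power) + cx
        y = rp * math.sin(theta * power) * math.sin(phi * power) + cy
        z = rp * math.cos(theta * power) + cz
    return None
```

A renderer ray-marches through space calling something like this (usually a distance-estimate variant) at each sample point; animating `power` or the iteration count is what makes the surface writhe.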

Happy Thread has a new favorite as of 22:12 on Jan 30, 2018

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop
All the way back in the 1970s, the TALE-SPIN project was coming up with Aesop's-fable-style moral tales about woodland creatures. The creator wanted to curate only the sensible stories out of his project, but the best ones were actually the "mis-spun" tales resulting from unspoken common knowledge not being inferred.

Read the bolded parts below for the best mis-spun stories:

quote:

Mis-spun Tales

One of the best ways to see why all the above components are necessary to a story generator is to see how we learned that they were necessary. It is not always obvious how a computer program will actually function while it is still in the planning stages. Important parts of a program are often left out because there was no way to know that they would be needed.

TALE-SPIN, in its early stages, frequently told rather strange stories. These “mistakes” caused many re-definitions in the original program. Since this process of “mistakes” followed by new theory is characteristic of AI programs in general it is worthwhile to look at these “mistakes” and consider what had to be done to fix them. (The output of the original stories has been simplified for ease of reading.)

***** 1 ******
One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe threatened to hit Irving if he didn’t tell him where some honey was.

Joe has not understood that Irving really has answered his question, albeit indirectly. Lesson: answers to questions can take more than one form. You’ve got to know about beehives in order to understand that the answer is acceptable.

*** 2 ***
One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe walked to the oak tree. He ate the beehive.

Increasing the range of acceptable answers is not enough. You have to know what the answers really mean.

*** 3 ***
In the early days of TALE-SPIN, all the action focused on a single character. Other characters could respond only in very limited ways, as in answering direct questions, for example. There was no concept of one character “noticing” what another character had done. Hence the following story, which was an attempt to produce “The Ant and the Dove”, one of the Aesop fables:

Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. He was unable to call for help. He drowned.

That wasn’t supposed to happen. Falling into the river was deliberately introduced to cause the central “problem” of the story. Had Henry been able to call to Bill for help, Bill would have saved him, but I had just added the rule that being in water prevents speech, which seemed reasonable. Since Bill was not asked a direct question, he didn’t notice his friend drowning in the river. “Noticing” is now an inference from change of location, so Bill sees Henry in the river, deduces that Henry’s in danger, and rescues him.

*** 4 ***
Here are some rules that were in TALE-SPIN when the next horror occurred. If A moves B to location C, we can infer not only that B is in location C, but that A is also. If you’re in a river, you want to get out, because you’ll drown if you don’t. If you have legs you might be able to swim out. With wings, you might be able to fly away. With friends, you can ask for help. These sound reasonable. However, when I presented “X fell” as “gravity moved X,” I got this story:

Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned.

Poor gravity had neither legs, wings, nor friends. Now "X fell" is represented with PROPEL, not PTRANS, that is, as "the force gravity applied to X," and the inferences from PROPEL are not the same as those from PTRANS.

*** 5 ***
The inclusion of awareness meant that I couldn't set up the stories the way I used to.

Once upon a time there was a dishonest fox and a vain crow. One day the crow was sitting in his tree, holding a piece of cheese in his mouth. He noticed that he was holding the piece of cheese. He became hungry, and swallowed the cheese. The fox walked over to the crow. The end.

That was supposed to have been “The Fox and the Crow”, of course. The fox was going to trick the crow out of the cheese, but when he got there, there was no cheese. I fixed this by adding the assertion that the crow had eaten recently, so that even when he noticed the cheese, he didn’t become hungry.

*** 6 ***
Before there was much concern in the program about goals, I got this story:

Joe Bear was hungry. He asked Irving Bird where some honey was. Irving refused to tell him, so Joe offered to bring him a worm if he'd tell him where some honey was. Irving agreed. But Joe didn't know where any worms were, so he asked Irving, who refused to say. So Joe offered to bring him a worm if he'd tell him where a worm was. Irving agreed. But Joe didn't know where any worms were, so he asked Irving, who refused to say. So Joe offered to bring him a worm if he'd tell him where a worm was...

Lesson: don’t give a character a goal if he or she already has it. Try something else. If there isn’t anything else, then that goal can’t be achieved.

Poor gravity :smith:
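The PROPEL/PTRANS fix is easy to render as a toy rule. PTRANS and PROPEL are real primitives from Schank's Conceptual Dependency theory (the ones named in the quote), but the event schema and `infer` function below are hypothetical, just to show why the two primitives need different inference sets:

```python
# PTRANS = an actor moves something (and goes along with it);
# PROPEL = a force is applied to an object.
# This dict schema is made up for illustration, not TALE-SPIN's representation.

def infer(event):
    """Location inferences keyed on the conceptual primitive."""
    facts = []
    if event["act"] == "PTRANS":
        # A deliberate move: both the object and the mover end up there.
        facts.append((event["object"], "at", event["to"]))
        facts.append((event["actor"], "at", event["to"]))
    elif event["act"] == "PROPEL":
        # A force acted on the object: only the object ends up there,
        # so "gravity" no longer lands in the river (and drowns).
        facts.append((event["object"], "at", event["to"]))
    return facts

fall = {"act": "PROPEL", "actor": "gravity", "object": "Henry", "to": "river"}
assert ("gravity", "at", "river") not in infer(fall)
```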

Happy Thread has a new favorite as of 22:06 on Jan 30, 2018

Ariong
Jun 25, 2012



I don't understand how that is possible using 1970's computing technology. I don't believe it. Do you have any more information on this project? I can't find any.

Hempuli
Nov 16, 2011



"Poor gravity had neither legs, wings, nor friends."

Ariong posted:

I don't understand how that is possible using 1970's computing technology. I don't believe it. Do you have any more information on this project? I can't find any.

I could find this article: https://dl.acm.org/citation.cfm?id=1624452
Looks like it's readable here: https://www.ijcai.org/Proceedings/77-1/Papers/013.pdf
The article is probably automatically scanlated from print, though, so the text has gems like this:

"GLGRGB WAS VERY THIRSTY . GEORGE
WANTED TO GET NEAR SOME wATER. GEURG E
WALKED FROM HI S PATCH OF GROUND ACROSS
THE MEADOW ThKOUGH THE VALLEY TO A RIVER
BANK. "

So it seems like a real thing, from the mid-seventies? Very interesting!

Hempuli has a new favorite as of 00:21 on Jan 31, 2018

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop

Ariong posted:

I don't understand how that is possible using 1970's computing technology. I don't believe it. Do you have any more information on this project? I can't find any.

We put a man on the moon with 1960's computing technology. This is just some string manipulation :shobon:

Getting logic-based AI to work is largely the same now as it always was. Classic AI was not data-driven so you didn't need terabytes of training data. The big AI winter happened as early as 1984, *after* the big hype about neural networks for solving problems died down.

That winter slowly ended as people realized that there are still all sorts of applications for even the most limited AI (such as the dumber "big data" statistics based ones that are popular today, and that are hungry for as much high-speed input from the internet as they can process). That sort of AI was succeeding in an increasing amount of niche areas, towards ubiquity, and now there's the whole internet full of new opportunities to use it and show it off. There was nowhere for pretty generated images and Gaston lyrics to go where they'd have been quite as appreciated in the 1970's, versus now with twitter. The big difference that you see today is not that there's some giant body of new research everyone knows all about, or some massive code library that took decades for researchers to build up that you now can't build a product without, or even modern computer speeds -- mostly it's the suddenly increased domain of problems being tried.

Also if you want more information about TALESPIN in particular, you can just Google any of the excerpts I quoted above to get full articles. Mostly old ones, so .pdfs without highlightable text.

Happy Thread has a new favorite as of 05:01 on Jan 31, 2018

Guy Mann
Mar 28, 2016

by Lowtax
It's easy to forget that ELIZA was created in the mid-60s. One of the best arguments against the transhumanist singularity dorks is the simple fact that things like AI and speech recognition have been stagnant for decades even with the exponential growth of computing power and memory.
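For context, ELIZA's whole trick was pattern-matching with response templates; a minimal sketch looks like this (these three rules are made up, and Weizenbaum's actual DOCTOR script was much larger, but it worked the same way):

```python
import re

# Toy ELIZA-style rule set: match a pattern, echo a fragment back in a template.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me more about your family."),
]
FALLBACK = "Please go on."

def respond(utterance):
    """Return the first matching rule's template, filled with the captured text."""
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*m.groups())
    return FALLBACK
```

No model of meaning anywhere, which is rather the point: this is 1966 technology, and it still fools people.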

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop

Guy Mann posted:

It's easy to forget that ELIZA was created in the mid-60s. One of the best arguments against the transhumanist singularity dorks is the simple fact that things like AI and speech recognition have been stagnant for decades even with the exponential growth of computing power and memory.

This article came out towards the end of the 80's AI winter:

"Elephants don't play chess"

The title of this famous article by Rodney Brooks comes from the fact that in a research setting most AI agents lived out their robotic lives inside of some abstract puzzle world like chess or checkers, nothing at all like the natural physical world that natural brains evolved to deal with. It's impossible to understand our instincts without the context where they came from. Elephants are considered smart but they don't do anything like playing chess.

It highlighted what I think is the reason for that stagnancy in AI you mentioned as persisting today - there is a difference between intelligence (problem solving) and minds (people). Researchers mostly only try to create the former, because it's far more profitable to do problem solving super well and sell it to industry. There is very little push for actually making a mind. I've seen researchers who do focus on it, but there aren't many. They focus on the more reasonable problem of simulating animals, ecosystems, and nature instead of directly going for the grand challenge of language-capable human minds. They know the reality that we are nowhere near ready to even simulate lower animals yet, even ants, from a robotics control standpoint or social reasoning or otherwise.

So where to begin towards that?

Right now if we want to test an AI in the natural world instead of a chessboard we have to use a robot. That sucks for a variety of reasons. The robot costs a million dollars and has a poor understanding of self-preservation, and will happily shear its own arms off or throw itself down stairs if it misunderstands a goal, and the very first time that happens you're out a million dollars. You also can't afford to have an ecosystem of robots, or better yet a gene pool of them, swarming around by the millions, trying things out, living and dying and letting natural selection work out the best form of intelligence over generations. We simply do not have time to wait for that just to run a single experiment. Lastly, even with perfect robots, we have very limited ability to train them because we'd have to sculpt the world around them just so, with earthmovers and construction engineers and elaborate movie set artists and then tear it all down when it's time to tweak the scenario.

It's much better to try out AIs in a virtual setting that resembles the natural world. But to this day, we don't really have that as an option at all. You might be thinking of beautiful and interactive video game worlds that are full of AIs, but for the most part those worlds are pretty limited too -- even as a player you usually can't do things like burrow a hole through the ground, a stone, or any individual polygon and re-shape it and re-work it for something else like our ancestors learned to do. You can't rip up the shirt you're wearing and use it to plug a leak. You can't re-purpose whatever you find in games, so neither can an AI.

Minecraft is a game where I thought they would finally break this barrier, since you can freely reorganize volumes of material, which in turn should affect the AI's goals such as pathfinding, and eventually have the AIs building structures and art. But the AI in that game is limited to pathfinding and nothing else -- and not even pathfinding that plans for future possibilities like moving material around to build a bridge or remove a wall. The AIs are simply forbidden from using the game's main mechanic! Only the human players are allowed to place blocks; instead of leaving some "safe zones" where bots cannot touch your work, they just can't do anything anywhere.

Only a few Minecraft modders tried out AI-on-environment interaction (one mod procedurally generated novel cities that reflect the needs of the organisms who built them) or AI-on-AI interaction (which could have generated culture and competition, to further give those cities meaning). Because Minecraft is closed source, those mods were lost to obscurity when the game updated, and due to inherent round-off error the silly blocks game can never simulate physically realistic things like rotating motions. But for a game that brought procedural generation into the mainstream, it's surprising the procedural cities idea did not take off.
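The kind of planning being described -- pathfinding that is allowed to move material -- is just ordinary shortest-path search with a finite cost for digging. A sketch (the grid format and the costs are made up for illustration; this is not how Minecraft's mobs work, which is the point):

```python
import heapq

def plan(grid, start, goal, dig_cost=5):
    """Dijkstra over a grid where '#' cells are solid but may be dug through.

    An ordinary pathfinder treats '#' as impassable; giving digging a finite
    cost lets the planner trade walking around a wall against going through it.
    Returns the cheapest total cost to reach `goal`, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = dig_cost if grid[nr][nc] == "#" else 1
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None
```

With a cheap `dig_cost` the planner tunnels straight through a wall; with an expensive one it walks around, so the same search exhibits both behaviors.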

I personally switched from majoring in AI to majoring in Computer Graphics as I found out how crucial physics simulation was going to be towards AI's emergence. Most of my labmates took an additional step outside of CS entirely to the Applied Math department, once they realized most of today's physically simulated virtual reality stuff was actually happening over there. We don't have the tools yet to pursue AI until we can simulate the world a little more faithfully and flexibly. Most AI researchers are not even interested in making a mind, and for every AI university class there's a completely different umbrella of what they consider to be relevant to the topic, to the point where the course title "AI" has lost all meaning.

Is it about advanced search trees? First order logic? Bayesian statistics? Language and animal reasoning? Tracking faces in videos? Simulating springy surfaces with iterative methods to elastically "snap" a smart selection tool? Evolving a gene pool to find solutions to TSP? Looking at a planner in PROLOG? Making a particle swarm to solve the scheduling problem? Using layers of autoencoders and then backpropagation / deep learning to try to blend images together?

I have personally seen every single one of these topics squeezed into AI curriculums and barely a single one has anything to do with making a mind. More of them have to do with procedural generation, but that's a broader topic and easier threshold to pass.

Happy Thread has a new favorite as of 06:47 on Jan 31, 2018

Tunicate
May 15, 2012

From the neuroscience side, it's funny to see computer scientists rediscovering poo poo that's in the cortex.

Lobotomy Bob
Jun 13, 2003

Noblesse Obliged posted:

Hello
Smithers
You
Are
Really
Good
At
Turning
Me
On

You should probably ignore that.


Ariong
Jun 25, 2012



Dumb Lowtax posted:

We put a man on the moon with 1960's computing technology. This is just some string manipulation :shobon:

Getting logic-based AI to work is largely the same now as it always was. Classic AI was not data-driven so you didn't need terabytes of training data. The big AI winter happened as early as 1984, *after* the big hype about neural networks for solving problems died down.

That winter slowly ended as people realized that there are still all sorts of applications for even the most limited AI (such as the dumber "big data" statistics based ones that are popular today, and that are hungry for as much high-speed input from the internet as they can process). That sort of AI was succeeding in an increasing amount of niche areas, towards ubiquity, and now there's the whole internet full of new opportunities to use it and show it off. There was nowhere for pretty generated images and Gaston lyrics to go where they'd have been quite as appreciated in the 1970's, versus now with twitter. The big difference that you see today is not that there's some giant body of new research everyone knows all about, or some massive code library that took decades for researchers to build up that you now can't build a product without, or even modern computer speeds -- mostly it's the suddenly increased domain of problems being tried.

Also if you want more information about TALESPIN in particular, you can just Google any of the excerpts I quoted above to get full articles. Mostly old ones, so .pdfs without highlightable text.

The fact that this is real is crazy.
