Trabisnikof
Dec 24, 2005

echinopsis posted:

hmm thanks

my first result turned out absolute dogshit and it’s kinda burned me lol

yeah the style transfer stuff is really loving hard to get good results with

i tried to make some photos of 2020 food lines look like norman rockwell and these are probably my best results after generating 100+ (and learning how to work around interactivity requirements on colab)

you might try very simplistic images that mostly have only a texture or only a pattern or just one style so the ml can transfer just one thing. it won't produce as "realistic" results but would probably look cool
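
fwiw the bare-bones gatys-style optimization loop a lot of those colab notebooks use is roughly this. a generic sketch, not my actual notebook: it assumes torchvision's pretrained vgg19, placeholder filenames, and skips niceties like imagenet normalization that you'd want for real runs

code:

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("food_line.jpg")    # placeholder filenames
style = load("rockwell.jpg")

# frozen vgg19 used only as a feature extractor
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [0, 5, 10, 19, 28]   # conv1_1 .. conv5_1 in torchvision's layer numbering
CONTENT_LAYER = 21                  # conv4_2

def features(x):
    out = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            out[i] = x
        if i >= max(STYLE_LAYERS + [CONTENT_LAYER]):
            break
    return out

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

content_feats = features(content)
style_grams = {i: gram(f) for i, f in features(style).items() if i in STYLE_LAYERS}

# optimize the pixels directly, starting from the content photo
img = content.clone().requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    opt.zero_grad()
    f = features(img)
    content_loss = F.mse_loss(f[CONTENT_LAYER], content_feats[CONTENT_LAYER])
    style_loss = sum(F.mse_loss(gram(f[i]), style_grams[i]) for i in STYLE_LAYERS)
    (content_loss + 1e6 * style_loss).backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)

transforms.ToPILImage()(img.detach().squeeze(0).cpu()).save("styled.png")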

echinopsis
Apr 13, 2004

by Fluffdaddy
hmm yeah: they all just look like style transfer. tempted to blend back with the original so the effect is more subtle

but yeah I was thinking i could maybe introduce a style somehow so my renders would look less rendered and more drawn. but i’m coming to the conclusion this isn’t the way
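
the blend part at least is easy to try with pillow, something like this (filenames made up):

code:

from PIL import Image

original = Image.open("render.png").convert("RGB")    # made-up filenames
stylised = Image.open("render_styled.png").convert("RGB").resize(original.size)

# alpha=0.0 is all original, 1.0 is all style transfer; ~0.3 keeps the effect subtle
Image.blend(original, stylised, alpha=0.3).save("render_subtle.png")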

crepeface
Nov 5, 2004

r*p*f*c*
op have you tried the prisma app

echinopsis
Apr 13, 2004

by Fluffdaddy
is that the one that does more pixel based effects rather than anything “neural”?

kinda like loving around with pixels but not sure of the best way

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


https://twitter.com/seagull81/status/1320132156774023168

fack you
Sep 12, 2002

For Life
so like you know in star trek all the doors on the enterprise slide open, but it was just some person pulling on wires or something at the right time in the script, not a real automatic door, so it appears to work not on motion, but like a person's intent because sometimes there'd be a whole conversation right next to a door and it'll magically open when riker is done chatting up troi or whatever ugh. could you feed in all the ridiculous footage into all this machine learning poo poo and end up with software that'd actually open a door based on intent just like how it appears on the show I don't know

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice

fack you posted:

so like you know in star trek all the doors on the enterprise slide open, but it was just some person pulling on wires or something at the right time in the script, not a real automatic door, so it appears to work not on motion, but like a person's intent because sometimes there'd be a whole conversation right next to a door and it'll magically open when riker is done chatting up troi or whatever ugh. could you feed in all the ridiculous footage into all this machine learning poo poo and end up with software that'd actually open a door based on intent just like how it appears on the show I don't know

Sure, as long as you define ML and AI as "paying 3rd world people pennies to watch video and push button when ready".

Cybernetic Vermin
Apr 18, 2005

fack you posted:

so like you know in star trek all the doors on the enterprise slide open, but it was just some person pulling on wires or something at the right time in the script, not a real automatic door, so it appears to work not on motion, but like a person's intent because sometimes there'd be a whole conversation right next to a door and it'll magically open when riker is done chatting up troi or whatever ugh. could you feed in all the ridiculous footage into all this machine learning poo poo and end up with software that'd actually open a door based on intent just like how it appears on the show I don't know

i think this is one of those cases where some ml model would do pretty well yeah. not fully star trek magical, but differentiating between someone going in, someone standing still, and someone walking past should be doable in a fairly accurate way.

and it's actually not a bad idea: the amount of energy lost to automatic doors opening unnecessarily in winter and such probably adds up, considering the relative simplicity of the implementation.
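
to be clear i mean something dumb like a classifier over a second or two of tracked motion, not full mind-reading. a toy sketch with made-up features and synthetic data, just to show the shape of it:

code:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_tracks(n, toward, lateral, dist):
    # synthetic per-person features: speed toward the door (m/s), sideways speed (m/s), distance (m)
    return np.column_stack([
        rng.normal(toward, 0.2, n),
        rng.normal(lateral, 0.2, n),
        rng.normal(dist, 0.5, n),
    ])

# 0 = walking in, 1 = standing around, 2 = walking past
X = np.vstack([
    fake_tracks(200, toward=1.2, lateral=0.1, dist=2.0),
    fake_tracks(200, toward=0.0, lateral=0.0, dist=1.5),
    fake_tracks(200, toward=0.1, lateral=1.3, dist=2.0),
])
y = np.repeat([0, 1, 2], 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict([[1.1, 0.05, 1.8]]))   # heading straight for the door -> expect [0], open it
print(clf.predict([[0.0, 1.2, 2.1]]))    # strolling past the front -> expect [2], keep it shut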

Sagebrush
Feb 26, 2012

all the people on star trek have universal translator brain implants so maybe they actually do read door-opening intent and broadcast it to the door.

they also never pause in front of the door to let it open, just stride straight through and expect it to be ready as they get there, so i wonder how often people in star trek show up with a broken nose because the doors are malfunctioning

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall pass away,
generation shall follow generation



lmao there's no way that would work irl without false negatives/positives

you could try converting all doors simultaneously while educating everyone on the new door paradigm, but still there'd be people with loopy strides or someone forgetting about the new doors & hesitating or whatever

even humans employed full time as doormen can't tell correctly 100% of the time

Agile Vector
May 21, 2007

scrum bored



it's in their comm badge with a connection back to the ship for the heavy lifting. they use a lost connection to the ship as a mcguffin lots of times

i imagine the alexaprise is peeping in on all the conversations and collecting motion data from the badge, since it does idk some health stuff probably, then figures out you're done getting burned by data as you leave engineering and opens the door

in other words bezos will be selling intent-based doors for home deliveries in 2023. also intent-based restocking as you reach for the second-to-last cola in 2022

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall pass away,
generation shall follow generation



it would be like one of those comedy bits where two people keep starting a sentence at the same time and say no sorry what were you saying

Feisty-Cadaver
Jun 1, 2000
The worms crawl in,
The worms crawl out.
somebody post that gif of the seagull tricking the convenience store door sensor and running back out with a bag of chips

or maybe it was a duck I forget

Pile Of Garbage
May 28, 2007



Feisty-Cadaver posted:

somebody post that gif of the seagull tricking the convenience store door sensor and running back out with a bag of chips

or maybe it was a duck I forget

https://www.youtube.com/watch?v=Kqy9hxhUxK0

ultrafilter
Aug 23, 2007

It's okay if you have any questions.



Source

distortion park
Apr 25, 2011


https://www.wsj.com/articles/when-the-machines-learn-to-price-gouge-11601281879?redirect=amp#click=https://t.co/BDhFcxs8fs


quote:

In such markets where both stations appeared to adopt algorithmic software, as estimated by sudden changes in the size and rapidity of price changes, margins increased by an average of almost 30%. Without pricing software at both stations, margins were unchanged.

Notably this would be illegal if people were doing it - the computers presumably manage some sort of weird pseudo-communication through their price signals and end up forming a cartel!
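
you can get the flavour with a toy: two sellers each run the same dumb "match the other guy, then nudge a cent higher" rule, never exchange a word, and still drift up to the price ceiling. numbers are made up and this has nothing to do with whatever the stations in the article actually run:

code:

COST, CEILING = 1.00, 1.60   # made-up marginal cost and the price the "algorithm" aims for

def reprice(rival_last):
    # each seller only ever sees the rival's posted price:
    # match it, then probe one cent higher, staying between cost and the cap
    return min(max(rival_last, COST) + 0.01, CEILING)

a = b = COST + 0.02          # both start barely above cost, i.e. roughly competitive
for day in range(100):
    a, b = reprice(b), reprice(a)
    if day % 20 == 0:
        print(f"day {day:3d}: a={a:.2f}  b={b:.2f}")
# both creep up to 1.60 without ever "communicating" beyond the prices they post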

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


BYOB has a thread where people visit threads from other forums and write a trip report. Someone got assigned this thread, and here's what they wrote:

beer pal posted:

u know how when you have a car you're all like yeah i get it... the gas blows up and that sends the pistons moving and then the wheels turn or whatever.. and then you take it to the mechanic and they say your screwjank is all rusted and your double boiler has got an oil leak in the drivecrank.. well thats what i got from the op of this thread. everybody knows what machine learning is its when the machines learn. even with the probably laymans explanation and sardonic tone all the computer words got me messed up. i read one computer word and its lights off up there. but anyway after that its people making small joke posts much like a yobbo would do on the subject of machine learning.

the tone from what i gather is mostly cynicism about the uses of machine learning (shady data collection & surveillance etc). theres a bunch of jokes about deep faking porn videos. another issue that comes up is the conception that algorithms, by the fact that they're run by computers and not done directly by humans, are inherently neutral / free of bias which is wrong & bad since if you teach computer to be racist, computer will be racist.

heres a couple posts in a row that i could understand and that were interesting to me:

The Management posted:

for those of you who haven’t worked with machine learning, here’s how it works:

* you take a bunch of pictures of shoes or horses or whatever the gently caress you want to recognize.
* then reduce those images to some ridiculous low resolution thing
* then you cram those images through some matrices until you get some matrix that happens to tell you horse or not horse.
* nobody knows why this matrix is horse, but it seems to work so you’re like, cool, I made a horse model.

now you take your horse model and set out to detect some horses. you take some pics and run them through your model and it’s all here a horse, there a horse, no horse there. but then there’s one picture of a horse and your model says no, not horse. and you have no idea why it’s not horse. the matrix is just a bunch of 8-bit values, it doesn’t even look like a horse. the answer, which you will never discover, is that the picture has a reflection of the sun in the water under the horse, which is not where the sun goes and your model couldn’t deal with that.

and then you realize that your model doesn’t understand poo poo, it hasn’t learned anything. it is incapable of learning, it’s a loving filter.

Sagebrush posted:

To me the most damning example is those papers with adversarial techniques where they apply a really subtle filter to the image and it tricks the algorithm. Like you start with an image of a turtle, and the machine recognizes it as a turtle, and then you apply a tiny convolution that affects 10% of the pixels in an almost undetectable way, and the algorithm is now certain that it's a picture of a gun. But it still looks like a turtle to you.

So the takeaway is that whatever the algorithm is triggering on, it's absolutely not what humans think of as turtle-like features. It doesn't see things the way that humans do, doesn't have the same "mental" model of the world. That means you can't make any assumptions about its behavior that are based on how humans would treat a given situation. Yet that's exactly what people do when they start pressing these systems into service in things like self-driving cars.

this all seems to agree with my conception prior to clicking on the thread that this kind of thing is 1) fake 2) nefarious or 3) both

This person gets it.
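
Both of those quoted posts fit in a few dozen lines of toy pytorch, for anyone who wants to see the shape of it: train a tiny "horse / not horse" net on synthetic data, then use the fast gradient sign method to find a per-pixel nudge far smaller than the noise in the data that still flips the answer. Everything below is made up for illustration:

code:

import torch
import torch.nn as nn

torch.manual_seed(0)

# made-up "images": 784 pixels of noise, horses nudged up a little, non-horses down
d, n = 784, 500
horses = torch.randn(n, d) + 0.1
others = torch.randn(n, d) - 0.1
X = torch.cat([horses, others])
y = torch.cat([torch.ones(n, dtype=torch.long), torch.zeros(n, dtype=torch.long)])

# "cram it through some matrices": a small MLP, horse (1) vs not horse (0)
model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

x = horses[0:1].clone()
print("clean prediction:", model(x).argmax(1).item())   # 1, i.e. "horse"

# fast gradient sign method: one small step in the direction that increases the loss
x.requires_grad_(True)
loss_fn(model(x), torch.tensor([1])).backward()
nudge = x.grad.sign()

for eps in [0.02 * k for k in range(1, 26)]:
    x_adv = (x + eps * nudge).detach()
    if model(x_adv).argmax(1).item() != 1:
        print(f"flips to 'not horse' at eps={eps:.2f} per pixel (pixel noise std is 1.0)")
        break

Same punchline as the quoted posts: it works right up until a nudge you could barely see, and nothing in the weights tells you why.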

Cybernetic Vermin
Apr 18, 2005

how much is jeffrey paying them for that dreary review work?

Media Bloodbath
Mar 1, 2018

PIVOT TO ETERNAL SUFFERING
:hb:

Cybernetic Vermin posted:

how much is jeffrey paying them for that dreary review work?

well it is BYOB, so existence in itself is the reward.

big scary monsters
Sep 2, 2011

-~Skullwave~-
beer pal seems to have a solid understanding of machine learning, missing only this key image that i guess i probably got from this thread but i'm too lazy to find the post to quote

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

big scary monsters posted:

beer pal seems to have a solid understanding of machine learning, missing only this key image that i guess i probably got from this thread but i'm too lazy to find the post to quote



it's in the op, op

big scary monsters
Sep 2, 2011

-~Skullwave~-
i ran it through a nn and now it's bigger and more true

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Trabisnikof posted:

Nature Communications charges a $5,700 publishing fee too lol

Nature Comms is basically a scam. It's PLoS ONE with better brand recognition and faux selectivity for papers that aren't actually of general interest (Nature), which is apparently worth a 4x premium over just publishing in PLoS ONE or knockoff PLoS ONE (Scientific Reports). I guess if your tenure committee is full of idiots who only care about whether your publication record has lots of high impact factor numbers and contains the word "nature" it's worth it.

animist
Aug 28, 2018
maybe i should edit the OP to have less words. maybe just like "algorithms are out to get you." and that picture

animist
Aug 28, 2018
oh also, this short story got posted in the c-spam doomsday econ thread (i think??) and its really good. pretty solid examination of "but what if the machines turn on us???": https://rifters.com/real/shorts/PeterWatts_Malak.pdf

big scary monsters
Sep 2, 2011

-~Skullwave~-
a surprisingly optimistic ending for watts

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

why? do some of the humans survive?

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall pass away,
generation shall follow generation



fart simpson posted:

why? do some of the humans survive?

no

Surprise T Rex
Apr 9, 2008

Dinosaur Gum

animist posted:

oh also, this short story got posted in the c-spam doomsday econ thread (i think??) and its really good. pretty solid examination of "but what if the machines turn on us???": https://rifters.com/real/shorts/PeterWatts_Malak.pdf

Yeah this is cool because it sort of gets to the center of "how innocuous rules can cause unintended behaviour", which, at least for the near future, is a much bigger concern than the old sci-fi AI plot of a machine attaining full sentience, deciding that people cause bad stuff like ecological collapse, and purging us.

I always like the "paperclip maximiser" thing for that too.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


https://twitter.com/xkcd/status/1330309742535827467

Schadenboner
Aug 15, 2011

by Shine

Imagine having quoted a Randalltweet.

:allears:

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


https://twitter.com/albrgr/status/1333838686992044032

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!
artificial stupidity

qirex
Feb 15, 2001

you'd think that 20 years and counting of seo would have taught people the value of manually gaming a "smart" system

MononcQc
May 29, 2007

https://aws.amazon.com/blogs/aws/amazon-devops-guru-machine-learning-powered-service-identifies-application-errors-and-fixes/

ah yes this will go well

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE


source, via calling bullshit

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


https://twitter.com/rajiinio/status/1343278077229588481

Midjack
Dec 24, 2007



https://twitter.com/DeepCapybara/status/1344840846177427457

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall pass away,
generation shall follow generation




yeah. it actually works out for all the things he puts in the square hole during training, so the problem is solved for all inputs in perpetuity

Sagebrush
Feb 26, 2012

we can optimize the solution by removing all these extraneous handlers. bonus time
