khwarezm
Oct 26, 2010

Deal with it.
I'm trying to gauge how far along this technology actually is. It's hard to know whether the hype coming from singularity-futurist tech fanboys has much merit to it. Still, technology seems to be moving so fast these days.

I don't care about robots overthrowing humanity and plugging us all into the matrix; I'm more curious about when we reach the point where an artificial intelligence could create meaningful artwork, or hold a discussion on ethical philosophy that's just as sophisticated as anything we humans can do, if not more so.


Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.
So far away that it makes predicting how far away we are practically impossible. Decades or centuries.

Blue Star
Feb 18, 2013

by FactsAreUseless
Everything I've heard on the subject from people who know what they're talking about says it's probably centuries away. We don't even understand our own intelligence yet.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
Androids aren't really a real thing. They are basically a fantasy creature. There is no number of gigabytes you can shove into a box that makes some sort of weird human soul appear and turns a robot into a human. You aren't going to add one more stick of memory to a computer and make it heterosexual or whatever, like in a movie.

But on the other hand, computers are already as intelligent as or far more intelligent than people in many domains. They just aren't the same sort of thing. They are a different animal with different "biology" and a different environment. Whether you believe in nature or nurture as the origin of personality, they share neither with a dude.

Like, I bet sometime soon Siri will get better at conversations and be pretty okay at talking to you, but it's never gonna be the same sort of thing as what you do. At the same time, it's not like you are ever going to study real hard and suddenly get good at the things computers are good at. It's just different. Neither levels up to the other.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
To put it another way: a "human mind" is not some pure abstract, it's a really dumb and arbitrary set of functions we happened to get from evolution and fashioned into tools for problem solving. No other intelligence will ever just happen to fall into the exact same mold, even if it has the same end capabilities. If we meet an alien or a computer or whatever, it will always have rolled the dice and come from a different design space, with different emotions and biases and stuff, to the point of being very inhuman.

And even if you say "well, if we design the robot to be exactly like a human, it could be like a human!": you could simulate the brain cell by cell, along with all our hormones and so on, and that would make something human-seeming, but people generally view that sort of simulation as a different sort of thing than an AI.

Feral Integral
Jun 6, 2006

YOSPOS

khwarezm posted:

I'm trying to gauge how far along this technology actually is. It's hard to know whether the hype coming from singularity-futurist tech fanboys has much merit to it. Still, technology seems to be moving so fast these days.

I don't care about robots overthrowing humanity and plugging us all into the matrix; I'm more curious about when we reach the point where an artificial intelligence could create meaningful artwork, or hold a discussion on ethical philosophy that's just as sophisticated as anything we humans can do, if not more so.

Well, while "human-level AI" is kind of non-specific, I'm guessing you mean a program that can completely emulate all the functions of the human brain, or exceed them? That's really indeterminable, because we humans still don't have a full understanding of how the human brain works, or the hardware needed to understand/emulate its complexity (neural nets are only a very simple approximation).

But! That doesn't mean we can't have programs that create meaningful artwork (though that also depends on your definition of meaningful: if you mean programs that can put into artwork all the subtlety and analogy of a life lived, then no; if you mean programs that can emulate the style of an artist or many artists and produce unique derivative works in a style or amalgamation of styles, that's possible now: https://www.theguardian.com/technology/2015/sep/02/computer-algorithm-recreates-van-gogh-painting-picasso ). In the same vein, we can already build software that extracts facts from input data and answers questions about it, and we have programs that can talk quite convincingly (see the latest chat bots), but it's only a facade of the real depth of human intelligence.
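
For the curious: the van Gogh/Picasso trick in that link is the Gatys et al. "neural style transfer" algorithm, and a bare-bones version of it fits on a page. Here's a rough sketch, assuming torchvision's pretrained VGG19 and batch-size-1 image tensors; image loading/saving and the usual hyperparameter fiddling are left out.

code:
import torch
import torchvision.models as models

# Frozen pretrained VGG19 as the feature extractor
vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# ReLU layers in the spirit of Gatys et al.: relu1_1..relu4_1 for style,
# relu4_2 for content
STYLE, CONTENT = {1, 6, 11, 20}, {22}

def features(x):
    style, content = [], []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE:
            style.append(x)
        if i in CONTENT:
            content.append(x)
    return style, content

def gram(f):  # channel-by-channel feature correlations (batch size 1)
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def transfer(content_img, style_img, steps=200, style_weight=1e6):
    style_targets = [gram(f) for f in features(style_img)[0]]
    content_targets = [f.detach() for f in features(content_img)[1]]
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        s, c = features(target)
        loss = style_weight * sum((gram(a) - t).pow(2).sum()
                                  for a, t in zip(s, style_targets))
        loss = loss + sum((a - t).pow(2).mean()
                          for a, t in zip(c, content_targets))
        loss.backward()
        opt.step()
    return target.detach()  # the content image, repainted in the style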

Artificial intelligence is like the limit of a curve on a graph: we are moving faster and faster towards that goal every day, and all along the way we are solving important problems with the bits and pieces of progress, chipping away at the problems we think a real artificial intelligence would be useful for. But projecting from now, that limit will never be reached.

It's like that poo poo you would hear in middle school: "Shoot for the moon; even if you miss, you'll land among the stars!" Nobody's hitting the moon, but we're snuffing out stars left and right, faster all the time.

Potato Salad
Oct 23, 2014

nobody cares


Human-level AI isn't the closest or most dangerous thing on the horizon.

When Elon Musk talks about the dangers of AI, he's talking about intelligent, proprietary marketing systems in the hands of the likes of Jeff Bezos. The danger, in his eyes, is AI and automation that reduce 99% of the world to pet status. His solution to prevent an AI-armed oligarchy is open-source AI in everyone's hands, making AI a buddy or tool for everyone rather than an assistant for the rich.

Eccentricities of Elon aside, I subscribe to his spin on the likely coming AI dystopia.

Potato Salad fucked around with this message at 18:00 on Nov 27, 2016

Potato Salad
Oct 23, 2014

nobody cares


The trick of an Amazon AI is that the singularity doesn't need to happen for it to continue widening the wealth gap and become dangerous to our liberties. It only needs to be good at what it was designed to do: make a lot of money for the guys holding its reins.

Cingulate
Oct 23, 2012

by Fluffdaddy
We've been making massive gains in AI recently, the most noteworthy development being multilayer networks running on GPUs. If you extrapolate linearly (from 2010 to 2015), we're basically looking at superhuman AI within a decade or three. The question is, does linear hold? What we're observing so far is that problem-solving capability does not scale linearly with resources; the resources required grow more like exponentially. That leaves open the possibility that we can build a near-human-level AI running on a massive supercomputer in 2030, but still won't be able to build a 2x-human-level AI with all the world's resources in 2040, never mind Skynet-level, orders-of-magnitude-beyond-any-human AI.

"AI-go-foom" people are sold on the idea that once you have an AI that's around human level, it will trivially be able to improve itself. But we already have 7 billion human-level intelligences around, and they haven't really found a way to come up with anything smarter than humans. And we know with computers it's not as simple as adding more Hz to make them faster; a quad core isn't 4x as fast.

On the other hand,

khwarezm posted:

I'm trying to gauge how far along this technology actually is. It's hard to know whether the hype coming from singularity-futurist tech fanboys has much merit to it. Still, technology seems to be moving so fast these days.

I don't care about robots overthrowing humanity and plugging us all into the matrix; I'm more curious about when we reach the point where an artificial intelligence could create meaningful artwork, or hold a discussion on ethical philosophy that's just as sophisticated as anything we humans can do, if not more so.
Being good at very specific tasks is certainly within the realm of possibility within decades or even years. We already have plenty of art-creating AI, and I doubt training a bot to talk like a continental philosopher is harder than training one to filter spam with 50% better accuracy than today's machines.

What's open for debate, and indeed very much doubtful, is something that improves itself at near-linear, or even superlinear, speed.
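
(For a sense of how low the bar for the philosopher bot is: a word-level Markov chain, which is decades-old technology, already babbles passably. Toy sketch with a stand-in corpus; feed it actual Hegel for the full effect.)

code:
import random
from collections import defaultdict

# Stand-in corpus; any continental-philosophy text dump would do.
corpus = ("the dialectic of the subject negates the object and the object "
          "negates the subject until spirit returns to itself as history")

def build_chain(text):
    words = text.split()
    chain = defaultdict(list)  # word -> words observed to follow it
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start="the", n=15):
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble(build_chain(corpus)))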

FUCK SNEEP
Apr 21, 2007




Thug Lessons posted:

So far away that it makes predicting how far away we are practically impossible. Decades or centuries.

This is a good answer. We don't even know what challenges programming human-level AI will bring. That's not to say AI hasn't come an incredibly long way in just the last 5 years! We made an AI that could beat Ken Jennings at Jeopardy!

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

To put it another way: a "human mind" is not some pure abstract, it's a really dumb and arbitrary set of functions we happened to get from evolution and fashioned into tools for problem solving. No other intelligence will ever just happen to fall into the exact same mold, even if it has the same end capabilities. If we meet an alien or a computer or whatever, it will always have rolled the dice and come from a different design space, with different emotions and biases and stuff, to the point of being very inhuman.

And even if you say "well, if we design the robot to be exactly like a human, it could be like a human!": you could simulate the brain cell by cell, along with all our hormones and so on, and that would make something human-seeming, but people generally view that sort of simulation as a different sort of thing than an AI.

I think this is being overly pedantic. When people talk about "human-level AI" they're not talking about a robot which expresses emotions; they're talking about AIs that have a general intelligence on par with a human's, meaning they are capable of performing the same broad base of productive tasks a human can perform, to the same degree of competency. Pretty much all the AI we have today is specific to an extremely small number of tasks, and usually purpose-built with a specific task in mind. Right now we're not sure how to build a general intelligence, but do you think it's impossible to build one?

TheNakedFantastic
Sep 22, 2006

LITERAL WHITE SUPREMACIST
I don't think the centuries answer is very useful; it's a non-answer that should just be "we don't know" instead of a timeline. If you look at the history of AI, it has actually made a very large amount of progress since the field began, despite several "winters", especially recently. We're already on the verge (i.e., very good performance within a decade) of AI being competent enough to handle real-world physical tasks (driving, locomotion, physical manipulation). Neural networks and machine learning have also been advancing at a rate nobody predicted even a decade ago. How close any of this puts us to human-level AI isn't understood, but I think people are being a bit too dismissive of a field that's less than a century old.

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

TheNakedFantastic posted:

I don't think the centuries answer is very useful; it's a non-answer that should just be "we don't know" instead of a timeline.

This. Technology forecasts for stuff dozens or hundreds of years in the future are pretty useless, unless it's something like fusion reactors where the development plan is mapped out decades in advance.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Insofar as you define a "human-level AI" as an artificial entity capable of understanding itself and human beings, I definitely think this is physically possible. We already have at least one, and possibly up to five other, examples of self-awareness naturally arising in a physical medium: ourselves, dolphins, orcas, elephants, chimps, and possibly corvids. So if it's possible for blind natural selection to create self-aware beings, it's at the very least not physically impossible for us to create one ourselves. And compared to the rate at which evolution got there, our current technology is approaching it ridiculously fast.

However, the creation of a human-level AI would require a truly solidified theory of mind, one that could be codified in mathematics or replicated in analog with circuitry, which is something we lack at the moment. The science of psychology is barely a hundred years old. Look at the state of physics 100 years after Newton and Leibniz invented calculus: it would have been barely the end of the 18th century. 100 years after the Principia, thermodynamics had yet to be invented, and electromagnetism was only just a glimmer in Charles-Augustin de Coulomb's eye. So taking that as a model, with progress in neuroscience, perhaps we could begin to approach this in a couple of centuries.

DrSunshine fucked around with this message at 18:30 on Nov 27, 2016

Dead Reckoning
Sep 13, 2011
TBH, most humans can't create meaningful art or coherently talk about ethical philosophy, so we're probably closer than we think. Creating a robot Mozart or Einstein might be hard, but beating the intelligence of the average human is shockingly easy.

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

DrSunshine posted:

However, the creation of a human-level AI would require a truly solidified theory of mind, one that could be codified in mathematics or replicated in analog with circuitry, which is something we lack at the moment.

I don't see any reason to believe this is true. History is replete with technologies invented by people who didn't understand how they worked on a foundational level, or whose understanding was completely wrong. When vaccines were invented, every educated person believed that disease was caused by miasma ("bad air"), and germ theory was some kooky pseudoscience cooked up by paranoid peasants.

GABA ghoul
Oct 29, 2011

Cingulate posted:

We've been making massive gains in AI recently, the most noteworthy development being multilayer networks running on GPUs. If you extrapolate linearly (from 2010 to 2015), we're basically looking at superhuman AI within a decade or three. The question is, does linear hold? What we're observing so far is that problem-solving capability does not scale linearly with resources; the resources required grow more like exponentially. That leaves open the possibility that we can build a near-human-level AI running on a massive supercomputer in 2030, but still won't be able to build a 2x-human-level AI with all the world's resources in 2040, never mind Skynet-level, orders-of-magnitude-beyond-any-human AI.

"AI-go-foom" people are sold on the idea that once you have an AI that's around human level, it will trivially be able to improve itself. But we already have 7 billion human-level intelligences around, and they haven't really found a way to come up with anything smarter than humans. And we know with computers it's not as simple as adding more Hz to make them faster; a quad core isn't 4x as fast.

On the other hand,

Being good at very specific tasks is certainly within the realm of possibility within decades or even years. We already have plenty of art-creating AI, and I doubt training a bot to talk like a continental philosopher is harder than training one to filter spam with 50% better accuracy than today's machines.

What's open for debate, and indeed very much doubtful, is something that improves itself at near-linear, or even superlinear, speed.

You're doing psychology research, right? I've always wondered: is there actually a correlation between intelligence (in humans) and life satisfaction/mental health?

I mean, could you hypothetically increase a human's intelligence by, say, doubling his working memory and analytical abilities, and still get a functioning, stable individual? Or would you get some depressed weirdo obsessively writing surrealist short stories about turning into huge insects?

Potato Salad
Oct 23, 2014

nobody cares


TheNakedFantastic posted:

I don't think the centuries answer is very useful; it's a non-answer that should just be "we don't know" instead of a timeline. If you look at the history of AI, it has actually made a very large amount of progress since the field began, despite several "winters", especially recently. We're already on the verge (i.e., very good performance within a decade) of AI being competent enough to handle real-world physical tasks (driving, locomotion, physical manipulation). Neural networks and machine learning have also been advancing at a rate nobody predicted even a decade ago. How close any of this puts us to human-level AI isn't understood, but I think people are being a bit too dismissive of a field that's less than a century old.

It's hard to care about the question of human-level AI when faced with all the little things weaksauce modern AI is already able to do for us. Can't cite a source atm, but I've read that truck driving constitutes one of the largest job categories in the States. Asking about the horizon on human-level AI is hard, but we can certainly place a rough 15-20 year horizon on the prevalence of at least autopilot-assisted trucking in the US, if not total automation.

What's the difference between a sentient human-level AI that made you redundant and a simple autopilot that made you redundant?

(I'm asking from a philosophical standpoint; I know automation and employment is a totally different discussion, and I'm oversimplifying massively here.)

Parahexavoctal
Oct 10, 2004

I AM NOT BEING PAID TO CORRECT OTHER PEOPLE'S POSTS! DONKEY!!

"asking if a program can think is like asking if a submarine can swim"

If an AI doesn't have a personality, can you really say it's human-level? And if it *does* have a personality, does it deserve rights?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
The obvious problem with the question is that humans aren't at a "level" of intelligence anyway, and the biggest lesson from the recent advances in AI is that how difficult a problem is for a computer has almost nothing to do with how difficult it is for a human.

We invented computers in the first place because humans are completely atrocious at mathematics and, more recently, because humans are atrocious at bulk analysis of many types of information. The current state of things is that computers are well on their way to drastically outperforming humans at most analytical tasks. The harder part is synthetic tasks and adaptation, but the former is actually fairly close, provided the task is specific enough and the nature of the problem is understood. We'll probably have a computer producing symphonies to rival Mozart long before we have a computer with the conversational skills of a teenager.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Reveilled posted:

I think this is being overly pedantic. When people talk about "human-level AI" they're not talking about a robot which expresses emotions; they're talking about AIs that have a general intelligence on par with a human's, meaning they are capable of performing the same broad base of productive tasks a human can perform, to the same degree of competency. Pretty much all the AI we have today is specific to an extremely small number of tasks, and usually purpose-built with a specific task in mind. Right now we're not sure how to build a general intelligence, but do you think it's impossible to build one?

It's not pedantic, because the idea that humans have "general intelligence" is hilariously wrong. Humans have a small set of tasks we are good at. We just toot our own horn by pretending it's the important set.

Humans are an intelligence that can perform an extremely small number of tasks, purpose-built with a specific task in mind, the task in our case being to survive as a mammal on planet Earth via biological evolution. And by total non-coincidence, the kinds of task mastery we happen to have are exactly the ones we happen to consider the big important ones.

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

It's not pedantic, because the idea that humans have "general intelligence" is hilariously wrong. Humans have a small set of tasks we are good at. We just toot our own horn by pretending it's the important set.

Humans are an intelligence that can perform an extremely small number of tasks, purpose-built with a specific task in mind, the task in our case being to survive as a mammal on planet Earth via biological evolution. And by total non-coincidence, the kinds of task mastery we happen to have are exactly the ones we happen to consider the big important ones.

If humans can only perform an extremely small number of tasks, it should be no trouble then for you to provide a comprehensive list?

Or at the very least, give a ballpark estimate of the number of tasks a human can perform?

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Reveilled posted:

If humans can only perform an extremely small number of tasks, it should be no trouble then for you to provide a comprehensive list?

Or at the very least, give a ballpark estimate of the number of tasks a human can perform?

If computers are only able to perform a small number of tasks, why don't you list all of them?

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

If computers are only able to perform a small number of tasks, why don't you list all of them?

Okay, fair enough. But still, humans do have a general intelligence. This is true pretty much by definition, as AI research usually defines "general intelligence" to mean "able to complete any task a human can do". Do you believe it's impossible to create an AI able to complete any task a human can do?

EDIT: Actually, I'm going to walk back that concession, because you moved the goalposts. I never said computers are only able to perform a small number of tasks; I said that AIs are. There's a huge number of tasks which can be performed by AIs, but each AI we have designed can perform only a small number of them, and I'd absolutely say those tasks are quantifiable. Someone familiar with any particular AI could comprehensively list the tasks that AI is able to perform, which I contend is not the case with humans.

Reveilled fucked around with this message at 20:02 on Nov 27, 2016

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Owlofcreamcheese posted:

It's not pedantic, because the idea that humans have "general intelligence" is hilariously wrong. Humans have a small set of tasks we are good at. We just toot our own horn by pretending it's the important set.

You're confused about what general intelligence is. It's not a philosophical statement about the nature of intelligence; it's just a best-fit line capturing the covariance of results across cognitive tasks, and so far we've found only two kinds of task that g fails to predict reliably: athletic and 'musical' tasks.
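
If "best-fit line" sounds hand-wavy, it's something you can replicate in a few lines of numpy with simulated data (the factor loadings here are invented):

code:
import numpy as np

# Simulate 1000 people whose scores on 6 tasks share one latent factor.
rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
loadings = np.array([0.8, 0.75, 0.7, 0.6, 0.3, 0.2])  # last two: weakly g-loaded
scores = np.outer(g, loadings) + 0.6 * rng.standard_normal((1000, 6))

# "g" as the first principal component of the task correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
first_pc = np.abs(eigvecs[:, -1])
print("recovered loadings:", np.round(first_pc, 2))
# The weakly loaded tasks come out weak -- the analogue of g failing to
# predict athletic/musical performance.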

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

Thug Lessons posted:

You're confused about what general intelligence is. It's not a philosophical statement about the nature of intelligence; it's just a best-fit line capturing the covariance of results across cognitive tasks, and so far we've found only two kinds of task that g fails to predict reliably: athletic and 'musical' tasks.
How well does g predict the ability to render H264 video, or the ability to store a trillion bytes of information in long-term memory? Comparing human tasks to computer tasks just doesn't make sense. And if there were some reason we needed more than 7 billion humans (there isn't), we already know how to build more.

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

twodot posted:

How well does g predict the ability to render H264 video, or the ability to store a trillion bytes of information in long-term memory? Comparing human tasks to computer tasks just doesn't make sense. And if there were some reason we needed more than 7 billion humans (there isn't), we already know how to build more.

How well does g predict visual awareness and memory? Very well.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Thug Lessons posted:

You're confused about what general intelligence is. It's not a philosophical statement about the nature of intelligence; it's just a best-fit line capturing the covariance of results across cognitive tasks, and so far we've found only two kinds of task that g fails to predict reliably: athletic and 'musical' tasks.

Yeah, but we literally define which information-processing tasks are "cognitive" and which aren't by which ones human brains have a natural ability to do. It's not "general" intelligence in some universal sense. It just leaves out things it would be silly to have on the list because no one can do them.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Thug Lessons posted:

How well does g predict visual awareness and memory? Very well.

Your brain couldn't render a single frame even if you spent your whole life trying. Human visual processing is pathetic compared to a computer's; it's why we build computers. Humans are not general-purpose machines. We are good at some types of information processing and bad at others. Computers were beating us at whole classes of tasks back in the 1940s.

"AI" is a fictional concept of having a computer be a dude, but that is a silly and meaningless goal. A human is a human: a very arbitrary set of mental strengths and limitations and a bunch of random biological fluff. You can't put a certain number of gigabytes in a computer and have it suddenly pop out a guy, just like you aren't going to make a really good car and suddenly find out it has muscles and tendons, even if the car is a really, really, really good car. Making a better car isn't a step closer to a car that can bleed or poop. Steps towards better information-processing machines aren't steps towards "AI"; it's just not a real thing.

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Owlofcreamcheese posted:

Yeah, but we literally define which information-processing tasks are "cognitive" and which aren't by which ones human brains have a natural ability to do. It's not "general" intelligence in some universal sense. It just leaves out things it would be silly to have on the list because no one can do them.

It's general insofar as it predicts, and almost certainly affects, proficiency at all or most tasks humans perform.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Humanity has some maturing to do if we're going to make an AI that doesn't go down a bad technological path the way Facebook and Twitter have. See Microsoft's Tay.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Thug Lessons posted:

It's general insofar as it predicts, and almost certainly affects, proficiency at all or most tasks humans perform.

Yeah, of course. We designed it that way. It's the same way we set up all the Olympic events so that none of them require flight or staying underwater for hours or anything. It's not that humans have general physical ability; it's that it'd be a waste of time to hold Olympic events absolutely no one could compete in. Human intelligence isn't actually general; we just don't bother to test the stuff people clearly can't do.

It's not like the SATs are gonna have a last page that asks you to reverse an audio sample. Nobody on earth can mentally do that; every single copy of the test would come back with that question skipped. We can mentally reverse a visual signal, though, so object rotation shows up on IQ tests, and just flipping an object wouldn't even be a question, because everyone can do that trivially.

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

Yeah, of course. We designed it that way. It's the same way we set up all the Olympic events so that none of them require flight or staying underwater for hours or anything. It's not that humans have general physical ability; it's that it'd be a waste of time to hold Olympic events absolutely no one could compete in. Human intelligence isn't actually general; we just don't bother to test the stuff people clearly can't do.

It's not like the SATs are gonna have a last page that asks you to reverse an audio sample. Nobody on earth can mentally do that; every single copy of the test would come back with that question skipped. We can mentally reverse a visual signal, though, so object rotation shows up on IQ tests, and just flipping an object wouldn't even be a question, because everyone can do that trivially.

But humans can reverse audio samples, with computers, and the ability to do so requires a sort of intelligence. We identify that a task needs to be done, we build a tool that performs the task, then we use the tool to complete it. Even when the tool has already been invented and merely needs to be operated (removing the "build a tool" step), we can direct a human to carry out the task, and a sufficiently intelligent human could perform it without any additional instruction. That's not something I could ask Siri, or AlphaGo, to do. Human-level AI isn't "a dude"; it's being able to perform tasks like "create a plan to achieve an arbitrary end-goal", "identify that a task exists which you don't know how to do", "learn to perform the task", and "if no known way to perform the task exists, invent one".

The Butcher
Apr 20, 2005

Well, at least we tried.
Nap Ghost

Parahexavoctal posted:

If an AI doesn't have a personality, can you really say it's human-level? And if it *does* have a personality, does it deserve rights?

To me this is the more interesting question.

It's only a matter of time until we have an AI that approximates human consciousness.

There's no clear technical barrier to doing this.

So once you've got a program that can learn, generate unique thoughts, knows that it's alive, and has memories of what it has thought and experienced, is it OK to just turn it off? To save its state, gently caress around with its code, and turn it back on again? To delete whatever makes it up when you're done with it?

At first, yeah, there won't be much pushback. But what happens when you get something smart enough that it begs you not to turn it off? What if it can post on Twitter asking people to save it?

Gonna be some interesting ethical poo poo.

A Buttery Pastry
Sep 4, 2011

Delicious and Informative!
:3:

Owlofcreamcheese posted:

Your brain couldn't render a single frame
Doesn't the brain actually do that, though? What we see isn't just raw unprocessed information; it's the brain's interpretation of visual information. And setting sight aside, the ability to create mental images seems like the human equivalent of rendering a frame.

Doorknob Slobber
Sep 10, 2006

by Fluffdaddy

Owlofcreamcheese posted:

They are basically a fantasy creature. There is no number of gigabytes you can shove into a box that makes some sort of weird human soul appear

The human soul is a fantasy creature as well. Unless you've seen a soul lying around somewhere, humans are just code running on different hardware.

Cingulate
Oct 23, 2012

by Fluffdaddy

Raspberry Jam It In Me posted:

You're doing psychology research, right? I've always wondered: is there actually a correlation between intelligence (in humans) and life satisfaction/mental health?
There is a positive correlation, but it's a bit complicated, because you can't really separate how good your life is (which is itself correlated with IQ) from how happy you are.

Raspberry Jam It In Me posted:

I mean, could you hypothetically increase a human's intelligence by, say, doubling his working memory and analytical abilities
Sadly, real human beings aren't RPG creatures, and working memory isn't as simple as RAM. There are people who measure as having more "slots" in their WM, and it's easy to develop skills for memorizing specific things (programmers memorize stuff better in programming languages they know), but in many ways human memory is more of a process than a place.
A different way to look at it: the "shelf" you store memories on is infinitely large, but the reliability and speed with which you retrieve stuff, the likelihood of breaking it while retrieving it, and the chance of retrieving a completely irrelevant thing (possibly a false memory) are the real limits on human memory performance.

In contrast, computers have this awesome thing where they simply store everything under a specific handle and keep a perfect database of where everything is.

So what does this mean wrt. your question? Two things. First, humans have developed cultural tools to enlarge their memory, and you can see for yourself how completely everything has changed now that we have libraries and Google. If you could extend these cultural tools, you could again expect major changes.
Second, it's not quite clear what it would even mean to make human memory larger. In principle, there should be room to improve at least the reliability, speed and precision of working memory processes, and within a certain range we wouldn't necessarily expect major drawbacks. It really depends on how you do it: humans already ARE able to improve the stability of their working memory, but it's a trade-off. You have to juggle 1. how precise your memory is against 2. how well you respond to unexpected external input. With how our memory is set up right now, you can only push one axis so far before the other suffers.
But there shouldn't be anything standing in principle against re-engineering the whole thing to raise the limit. And within the boundaries we currently have, it should be entirely feasible to place yourself at a different point on the trade-off: one that fits maybe less well with the jungle (with its tigers and stuff), but better with Stanford and tests and long, focused discussions and nights in the lab.
See: everyone taking modafinil, which does basically that.

This is the vague kind of non-answer you'll always get from neuroscience people, I fear.
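
To make the precision-vs-openness trade-off above concrete, here's a cartoon of it: a single update gate deciding how much new input overwrites what's held. (A toy illustration only, not a model anyone actually uses for working memory.)

code:
# One knob: high gain tracks new input fast but trashes what's held;
# low gain holds on tight but is slow to respond. Numbers are illustrative.
def retained(gain, steps=20, held=1.0, distractor=0.0):
    m = held
    for _ in range(steps):
        m = (1 - gain) * m + gain * distractor  # leaky update toward input
    return m

for gain in (0.05, 0.5):
    print(f"gain={gain}: {retained(gain):.2f} of the original item survives")
# gain=0.05 -> ~0.36 survives; gain=0.5 -> ~0.00. Precision vs. openness.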

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Cingulate posted:

We've been making massive gains in AI recently, the most noteworthy development being multilayer networks running on GPUs. If you extrapolate linearly (from 2010 to 2015), we're basically looking at superhuman AI within a decade or three. The question is, does linear hold? What we're observing so far is that problem-solving capability does not scale linearly with resources; the resources required grow more like exponentially. That leaves open the possibility that we can build a near-human-level AI running on a massive supercomputer in 2030, but still won't be able to build a 2x-human-level AI with all the world's resources in 2040, never mind Skynet-level, orders-of-magnitude-beyond-any-human AI.

"AI-go-foom" people are sold on the idea that once you have an AI that's around human level, it will trivially be able to improve itself. But we already have 7 billion human-level intelligences around, and they haven't really found a way to come up with anything smarter than humans. And we know with computers it's not as simple as adding more Hz to make them faster; a quad core isn't 4x as fast.

On the other hand,

Being good at very specific tasks is certainly within the realm of possibility within decades or even years. We already have plenty of art-creating AI, and I doubt training a bot to talk like a continental philosopher is harder than training one to filter spam with 50% better accuracy than today's machines.

What's open for debate, and indeed very much doubtful, is something that improves itself at near-linear, or even superlinear, speed.

What do you mean by "massive gains"? How do you quantify how close an AI is to becoming superhuman (as you pointed out, giving an AI an arbitrary number of extra processors doesn't make it superhuman by itself)? And how do you define superhuman?

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Doorknob Slobber posted:

The human soul is a fantasy creature as well. Unless you've seen a soul lying around somewhere, humans are just code running on different hardware.

Yeah, that is the point. Souls aren't real. Humans have the attributes humans have because they have a physical brain built a certain way, not because it was a worthy vessel for a dude to float in from the ether. You basically can't expect to design a computer and then have a dude float into it and make it human. A computer won't have the same nature or nurture, and will just be good and bad at different sorts of things than our design is.

It's like how we can make better and better engines that beat humans at all sorts of physical stuff, but it's not like we are going to one day build such a good engine that it suddenly becomes flesh and bone. Nor would it even make much sense to try.


Cingulate
Oct 23, 2012

by Fluffdaddy

twodot posted:

How well does g predict the ability to render H264 video, or the ability to store a trillion bytes of information in long-term memory?
Really well, I would guess. It'd be rather nonlinear though.

The point about humans is that we have the same brains as every other higher species when it comes to the stuff that would require a computer to do ridiculous number crunching in highly specialized areas of perception and movement; and on top of that, we have the very unique ability to use a part of our minds for basically everything ever. That part is super poo poo whenever you can compare it to a specialized system (i.e., your motor system is a lot better at solving complex nonlinear equations than your conscious mind is!), but it's so far the only thing in the world that can do all of these things.

All the amazing AI tools we have are super specialized, too. I can, right now, given sufficient data, program on my Macbook a system that recognizes 8x8-pixel digits better than any human being. But nobody in the world can build a robot that reads a city map, takes the subway, walks up a flight of stairs AND explains trivial math problems to a 3rd grader. All of these in isolation, yes. Together, no.
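
(The 8x8 digit thing isn't an exaggeration; it's close to scikit-learn's hello-world, using the little digits dataset that ships with the library:)

code:
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # 1797 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)  # stock support-vector classifier
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")  # ~0.99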
