Admiral Ray
May 17, 2014

Proud Musk and Dogecoin fanboy

Three Olives posted:

I went off on this many pages ago and now you have gotten at the core of how stupid Tesla's/Musk's view of AI driving is. Yes, stereoscopic vision calculation is a WAY of calculating a 4D vision environment that we thought made a lot of sense because it worked often and we have two eyes. Our modern understanding of human vision is that it probably doesn't work like that at all. Human vision probably isn't a thing at all in the sense that we perceive it, it's probably like a gazillion different modelings of reality mashed together and constantly corrected, huge amounts of potential perception constantly discarded as completely useless. Just a personal example: I use monocular vision, purposefully have different vision depth in each eye. I see just FINE, I have perfectly fine depth vision, I drive just fine, I can catch something thrown at me, I am loving blind if you cover my right eye for anything more than a few feet away from me at best.

The truth is we all probably possess just a staggering amount of object permanence information that is just completely imperceptible to us outside of how we experience it. There is actually probably very little that you need to actively perceive about your environment, you probably mostly just need to know that your house isn't on fire or your co-worker isn't running at you trying to murder you with a knife to perceptively navigate your home/office environment even though it all looks perfectly filled in down to the woodgrain and stain on the floor.

I'm honestly shocked that we haven't reached this market view yet, but the truth is the "rational markets" at this point would probably rather have you assume the costs of your office space in the form of a home office and spend $1,000 putting a 4k video conferencing device in it than spend a bunch of money figuring out how to drive you to an office space that they realized they can stick you with the bill for.

Just give it a few months, companies will start talking about how they can give us a $50 toner cartridge and Zoom display lease and cut their facility budget by $400 a month and act like they are doing us a HUGE FAVOR.

to be serious about this (instead of just shitposting that one cone = one point in space and spherical cones = more better), the amount of post-processing our brains do to make sense of the world cannot be overstated. keeping it to vision: there are specific structures in the brain that handle specific analysis tasks from the visual information stream. the things we end up seeing aren't the mishmash of many different models, but the result of hundreds of millions of neurons that have specialized tasks. as an example, whether or not we recognize an object as upright is dependent on specific neurons that are oriented in such a way that they fire when the visual input they get matches their orientation. if we went through a brain and burned those neurons out it would be impossible to "see" the object in that orientation. we would still know it was there (because of permanence), but we simply wouldn't understand it until we tilted our heads. similarly, there's a specific area that seems dedicated to recognizing faces (the fusiform face area). when it is damaged by a stroke or suppressed by an implant/magnetic stimulation, we cannot understand what a face is. we can still see mouths and noses and eyes and ears, but they don't make any sense together. it's somewhat like the images that machine learning algorithms produce that seem to have recognizable parts at first glance but have no discernible whole.

the models the brain "builds" aren't explicit, nor are they competing. a good cognitive neuropsych saying that goes towards explaining this is "neurons that fire together, wire together". we have neurons in many different orientations, responsive to different things (whether direct visual information or processed information), and the ones that start off firing independently but are collectively activated by the same input signal become associated and will eventually be able to trigger one another in a complex loop of feedback, integration, and feedforward.

this is all to say that we perceive a lot about our environments. that information is constantly updated, checked against other inputs, and feedback and feedforward produced. it's not so much that a million models are mashed together, but that the brain is the model. our direct visual information stream doesn't need to be the best because of the post-processing performed, and there's a lot more information stored in a sequence of images than we tend to believe since we don't actively perceive it. none of the information we receive is discarded as useless, but it may be deprioritized over other information during an early stage of processing, like the bear in the moonwalking bear awareness test.
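the "fire together, wire together" idea is simple enough to sketch in a few lines of toy code. this is a bare Hebbian weight update, not a model of actual neurons, and the `hebbian` function and its parameters are made up for illustration:

```python
import random

def hebbian(trials=1000, rate=0.01, correlated=True, seed=None):
    """Toy Hebb's rule: strengthen a synapse whenever both neurons fire."""
    rng = random.Random(seed)
    w = 0.0  # synaptic weight between neuron a and neuron b
    for _ in range(trials):
        a = rng.random() < 0.5                       # does neuron a fire?
        b = a if correlated else rng.random() < 0.5  # b fires with a, or on its own
        if a and b:
            w += rate  # fire together -> wire together
    return w

# neurons driven by the same input end up far more strongly wired
# than neurons that just happen to fire independently
print(hebbian(correlated=True, seed=1))   # roughly 5.0
print(hebbian(correlated=False, seed=1))  # roughly 2.5
```

the point of the toy: no single update does anything clever, but repeated co-activation is all it takes for an association to form.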

Karl Sharks
Feb 20, 2008

The Immortal Science of Sharksism-Fininism

so those neurons are just a bunch of if else statements, got it, let's make the code

Bideo James
Oct 21, 2020

you'll have to ask someone else about the size of her cans

Zazz Razzamatazz posted:

I'm no engineer type, but it seems to me if they just made that offloading area a little bigger they could do away with the inner offload zones and the crosswalk... The right hand lane for offloading and the left hand lane for passing through- fewer pedestrians hit and no cars stopping to wait for crosswalk traffic or other cars.

human death is part of engineering. gotta factor it in or else its not efficient.

FORUMS USER 1135
Jan 14, 2004

Karl Sharks posted:

so those neurons are just a bunch of if else statements, got it, let's make the code

code:
Start:
if (Car)
{
    LoveIt();
}
else
{
    goto Start;
}
ai's done, let's launch some satellites now

Divot
Dec 23, 2013
Tesla Solar business lookin' good

https://twitter.com/dadsoutrunning/status/1333804910295433226?s=20
https://twitter.com/dadsoutrunning/status/1333804912484909056?s=20
https://twitter.com/dadsoutrunning/status/1333804915085377536?s=20
https://twitter.com/dadsoutrunning/status/1333804917799092228?s=20

Bideo James
Oct 21, 2020

you'll have to ask someone else about the size of her cans

it's been like this the entire time. every solar company in Northern California is built into this same scam.

happyhippy
Feb 21, 2005

Playing games, watching movies, owning goons. 'sup
Pillbug

Needs ejector seats.
Just dumps you on the sidewalk instantly.
Make it so Elon!

WhyteRyce
Dec 30, 2001

Bideo James posted:

it's been like this the entire time. every solar company in Northern California is built into this same scam.

My process went extremely smooth and quick but I avoided Solar City and anyone set up in a booth in Costco/Home Depot trying to sell poo poo

World War Mammories
Aug 25, 2006


Admiral Ray posted:

to be serious about this (instead of just shitposting that one cone = one point in space and spherical cones = more better), the amount of post-processing our brains do to make sense of the world cannot be overstated. [...]

alright gently caress it I'm diving back in. good post.

if you can only know just one thing about neurology, "neurons that fire together wire together" (so-called "Hebbian learning") is probably the best choice. pretty much every property and activity of neural tissue derives from applying this concept over and over.

the layering of the visual system is incredibly interesting. the whole idea about processing happening at every point in the information stream is true. information processing even happens literally in your eyes. the way your retina is physically constructed is that the rods/cones fire when a particular wavelength of light hits a chemical called retinal, derived from vitamin A, which makes a bond in it physically flip - giving the molecule a different shape, which I mean literally - that starts a whole cascade of things that results in the cone/rod sending a signal.

related asides:
- the actual thing that rods and cones respond to is darkness. in the dark they're firing action potentials; light makes them quiet down.
- rods (the ones you use for night vision) are sensitive enough that you can see a literal single photon.
- vertebrate eyes are backwards, or inside out. the light-detecting cells are all the way at the back of the retina. the wiring I'm about to ramble about is in front of your light detectors, which means your brain has to do a lot of passive editing to greenscreen them out. this is especially weird because other species have eyes that make sense - octopi, for example, have the light detecting cells on the back interior of their eyes, rather than back exterior like we do, and the wiring is recessed behind them. our eyes are kind of like building a solar panel and then piling all the wires and power cables on top of it. this is also extremely strong evidence for evolution in that it suggests sight evolved separately many times.

anyway, as I said, processing of the raw "I saw a photon!!!" data from each individual rod/cone starts right away. amacrine cells, retinal ganglion cells, bipolar cells, and other kinds of neurons make up that retina wiring in front of your rods/cones and do a lot of heavy lifting. in particular they start to detect "center-on" and "center-off": places where either one rod/cone saw a photon and its surrounding neighbors didn't, or one rod/cone saw nothing but most/all of its surrounding neighbors did. by associating adjacent center-on and center-off spots (fire together, wire together), you start getting simple detection of edges/lines in the visual field before you've even left the eyeball. then that information gets sent to the back of your brain (oh yeah, that's another thing: your visual cortex is almost as far away from your eyes while still being in the brain as it's possible to be) where further associative layers start picking out orientations, as in this classic result from hubel and wiesel -
https://www.youtube.com/watch?v=Cw5PKV9Rj3o
- neurons that fire when there's specifically an edge on a specific spot of the visual field that's facing a particular direction. this kind of stuff propagates through several layers of visual cortex to build more complex discrimination - say, an edge moving in a specific direction in some range of speed. beyond this we know less. but there's good evidence that at some point this information stream splits into two, the so-called "what and where pathways", in which different parts of your brain divvy up the jobs of identifying what the hell you're looking at and how not to bump into things. this is thought to explain the extremely weird ways that cognitive deficits can present. for example, there are people who have two perfectly working eyes who, because of a particularly located lesion or whatever, will be completely unable to get the concept of leftness. you'll ask them to trace a picture that they can look at without issue, but they'll only do the right half of it and go "okay, done." or a somewhat famous patient who lost a chunk of her motion/"where" pathway and saw the world as essentially a slideshow: given a still image, she could say "okay, that thing is about ten feet away, that other one is behind it, maybe five feet more," but put her on the street and she would be literally unable to tell you which way people were walking, whether things were approaching her, even if the cars on the drat road were in motion. and then all this stuff about which we only know vague bits feeds into the frontal lobe's decision-making parts, and someone who figures that out has more important inventions to make than loving cars.
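the center-on/center-off business above is basically a difference-of-center-and-surround computation. here's a toy sketch of it (the `center_on` function is a made-up name, and nothing here is retina-accurate - it just shows why the trick picks out edges):

```python
# toy "center-on" detector: a cell responds when its own photoreceptor sees
# light and its surrounding neighbors mostly don't - center minus surround
def center_on(image, r, c):
    center = image[r][c]
    neighbors = [image[r + dr][c + dc]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if not (dr == 0 and dc == 0)]
    surround = sum(neighbors) / len(neighbors)
    return center - surround  # big positive = bright spot on dark surround

# a vertical edge: dark on the left, bright on the right
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]

# cells straddling the boundary respond; cells in flat regions stay near zero
print(center_on(img, 1, 2))  # → 0.375 (bright side of the edge)
print(center_on(img, 1, 1))  # → -0.375 (dark side of the edge)
```

in a flat patch of the image, center equals surround and the response is zero, so only the boundary cells fire - edge detection before the signal has even left the eyeball.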

so to tie it back to the topic: musk thinks it'll be a breeze to simulate all this poo poo with a couple cheap cameras and black box machine learning into which you pump unreliable data to get out infallible conclusions. give me a loving break.

Proteus Jones
Feb 28, 2013



WhyteRyce posted:

My process went extremely smooth and quick but I avoided Solar City and anyone set up in a booth in Costco/Home Depot trying to sell poo poo

Thank you for validating that I'm correct in giving side-eye to the people trying to sell me garage door/roofing/windows/HVAC as I saunter out the door at Costco.

Tunicate
May 15, 2012

World War Mammories posted:

alright gently caress it I'm diving back in. good post.

if you can only know just one thing about neurology, "neurons that fire together wire together" (so-called "Hebbian learning") is probably the best choice. pretty much every property and activity of neural tissue derives from applying this concept over and over.

the layering of the visual system is incredibly interesting. [...]

also the red/green and blue/yellow color channels follow totally different paths for a large portion of processing

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

this is probably the best possible outcome from this story. for one thing, he still has a functional roof

World War Mammories
Aug 25, 2006


Tunicate posted:

also the red/green and blue/yellow color channels follow totally different paths for a large portion of processing

is that so? interesting, doesn't surprise me. learning biology is equal parts wonder at its beautiful complexity and disbelief that people don't spontaneously explode in a stiff breeze. I am now utterly incapable of believing in "intelligent" design.

also while I still can shout from my soapbox: if humanity manages to create AI before climate death, we won't program it. we'll grow it. babies are born with pretty much all the neurons they'll ever have, 100 billion or so (there are important exceptions - hippocampus in particular - but I'm glossing over those). on the other hand, which of babies or adults has more synapses - more connections between those neurons? babies - twice as many as adults! creating a sentient being is done by connecting everything willy-nilly and pruning away connections that aren't helpful. like starting with a block of marble and chipping away all the excess to make the statue inside, rather than building the statue by gluing pebbles together. and yet we've got bazillionaires convinced they're tony stark talking up their incredibly cool PebbleGlu which they're gonna use to build a space elevator out of gravel.
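the grow-dense-then-prune idea can be sketched as a toy, too. everything here (the `grow_and_prune` name, the numbers, the weight cutoff) is made up for illustration - real pruning is activity-dependent, not a simple threshold:

```python
import random

def grow_and_prune(n=20, keep_fraction=0.5, seed=1):
    """Toy marble-not-pebbles: overconnect everything, then prune the weak."""
    rng = random.Random(seed)
    # "infant brain": every ordered pair of the n units gets a random synapse
    synapses = {(i, j): rng.random() for i in range(n) for j in range(n) if i != j}
    # "pruning": drop the weakest connections, as if unused ones wither away
    cutoff = sorted(synapses.values())[int(len(synapses) * (1 - keep_fraction))]
    pruned = {pair: w for pair, w in synapses.items() if w >= cutoff}
    return len(synapses), len(pruned)

before, after = grow_and_prune()
print(before, after)  # the "adult" network keeps half the "infant" synapses
```

start with the block of marble (every connection present), chip away what doesn't carry signal, and what's left is the statue.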

Karl Sharks
Feb 20, 2008

The Immortal Science of Sharksism-Fininism

infernal machines posted:

this is probably the best possible outcome from this story. for one thing, he still has a functional roof

and they still have their 32k

BMan
Oct 31, 2015

KNIIIIIIFE
EEEEEYYYYE
ATTAAAACK


World War Mammories posted:

- vertebrate eyes are backwards, or inside out. the light-detecting cells are all the way at the back of the retina. the wiring I'm about to ramble about is in front of your light detectors, which means your brain has to do a lot of passive editing to greenscreen them out

Fun fact, you can defeat this processing and see your own eye's blood vessels by shining a flashlight into your eye from the side and moving it around rapidly

slicing up eyeballs
Oct 19, 2005

I got me two olives and a couple of limes


BMan posted:

Fun fact, you can defeat this processing and see your own eye's blood vessels by shining a flashlight into your eye from the side and moving it around rapidly

trip report: holy poo poo

Lady Militant
Apr 8, 2020

The history of all hitherto existing society is the history of class struggles.

BMan posted:

Fun fact, you can defeat this processing and see your own eye's blood vessels by shining a flashlight into your eye from the side and moving it around rapidly

so that's what my character was doing every time i went through a fog wall in DS2

Hillary 2024
Nov 13, 2016

by vyelkin

World War Mammories posted:

is that so? interesting, doesn't surprise me. learning biology is equal parts wonder at its beautiful complexity and disbelief that people don't spontaneously explode in a stiff breeze. I am now utterly incapable of believing in "intelligent" design.

also while I still can shout from my soapbox: if humanity manages to create AI before climate death, we won't program it. we'll grow it. [...]

The more I hear about how the brain works the less likely it seems that we'll see AI in my lifetime.

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

Hillary 2020 posted:

The more I hear about how the brain works the less likely it seems that we'll see AI in my lifetime.

What does that even mean though? You could ask ten people to define "AI" and get ten different explanations.

We already have assistants that some people think of as AI, like Alexa, even though they're not "intelligent" and certainly not sentient.

If you're talking about a fully sentient lifeform made from transistors, yeah, we're a long long way off. But things like Roombas are already becoming ubiquitous and more and more tasks will be automated every year.

It's important to separate current "weak" AI from the concept of "strong" AI, but I think it's also important to look at things as a gradient instead of two distinct states.

Some tasks like driving are pretty complex and may require intelligence approaching the level of a human to really satisfy, but there's plenty of other tasks that are easier to solve that we'll replace with what many will call "AIs" in the coming years.
Like the Google automated phone caller thing. It's pretty "stupid" but it's still effective automation that you can use to handle a task for you, and lots of people will refer to that as an "AI" of a different sort.

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

Zaphod42 posted:

Like the Google automated phone caller thing.

how's that going anyway?

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

infernal machines posted:

how's that going anyway?

IDK I haven't been following it too closely but I'm pretty sure it's solid enough that some people do use it.

It's pretty funny how we invented the telephone and then we invented text messaging and now phone calls are all just going to be robots talking to robots on our behalf because phone conversations are annoying.

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
huh. i was under the impression that they had demoed it but it hadn't actually launched (for reasons). admittedly, i haven't been following too closely either.

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

infernal machines posted:

huh. i was under the impression that they had demoed it but it hadn't actually launched (for reasons). admittedly, i haven't been following too closely either.

Yeah I'm not entirely sure. It's definitely like them to talk big about something and then try to quietly make it disappear.

Just-In-Timeberlake
Aug 18, 2003
a first year CS student could write a roomba algorithm

Now an elevator algorithm is actually interesting.
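the elevator algorithm really is a neat little exercise: sweep in one direction serving every request ahead of you, then reverse. a minimal sketch of that idea (the `scan` helper is hypothetical, not any real controller):

```python
def scan(current, direction, requests):
    """Serve floor requests elevator-style (SCAN): keep moving one way while
    requests remain ahead, then reverse. direction is +1 (up) or -1 (down)."""
    order = []
    pending = sorted(requests)
    while pending:
        if direction > 0:
            ahead = [f for f in pending if f >= current]
        else:
            ahead = [f for f in pending if f <= current]
        if not ahead:
            direction = -direction  # nothing left this way: reverse
            continue
        # serve the nearest request in the current direction of travel
        nxt = min(ahead) if direction > 0 else max(ahead)
        order.append(nxt)
        pending.remove(nxt)
        current = nxt
    return order

# going up from floor 5: serve 6, 8, 12 on the way up, then 3 and 1 coming down
print(scan(5, +1, [8, 3, 12, 1, 6]))  # → [6, 8, 12, 3, 1]
```

the interesting part is the trade-off it embodies: nobody gets served strictly first-come-first-served, but total travel (and worst-case starvation) beats naively chasing each request in arrival order.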

Divot
Dec 23, 2013
I'm kind of the opinion that 'Artificial Intelligence' is a misnomer in the sense that people use the term for porting functions the human brain is capable of over to computers.

The fault of which is basically everything the neuroscientists types have been describing in the past page or so of this thread.

Since learning more about how computer science / programming actually works it seems obvious to me that the real 'Artificial Intelligence' should apply more to the things computers are good at that the human brain is not.

Like, say, encryption or databases or something.

Computers are far better at dealing with the large numbers and factoring behind RSA encryption than the human brain is. That sort of thing is the real 'Artificial Intelligence'.

But I guess you can't sell that sort of idea to investors in TYOOL 2020, because the foundation of what classical computers are truly good at compared to the human brain was discovered decades ago.
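for what it's worth, the computer's advantage in RSA is the cheap modular arithmetic - multiplying the primes together and exponentiating is easy; factoring the product back apart is the hard part even for machines, which is the whole reason RSA works. a toy round-trip with deliberately tiny textbook primes (illustration only, never real crypto):

```python
# toy RSA with absurdly small primes - illustration only, never real crypto
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

msg = 42
cipher = pow(msg, e, n)    # encrypt: msg^e mod n
plain = pow(cipher, d, n)  # decrypt: cipher^d mod n
print(plain)  # → 42
```

with real 2048-bit moduli the `pow` calls are still fast, but recovering `p` and `q` from `n` alone is infeasible - the asymmetry is the product, not the "intelligence."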

Sudden Loud Noise
Feb 18, 2007

I used the automated phone answerer that Google has when I had a Pixel 2 and Google Fi. It worked well for filtering out spam without sending directly to voice mail because it does real time transcription of the call and you can pick up at any point if it turns out to be a call that you actually want to take.

WhyteRyce
Dec 30, 2001

infernal machines posted:

huh. i was under the impression that they had demoed it but it hadn't actually launched (for reasons). admittedly, i haven't been following too closely either.

I completely forgot the Pixel had an automated phone screen feature until I was messing around with an old Pixel 3a. Had the phone since it launched and don't think that feature was ever used.

Just-In-Timeberlake posted:

a first year CS student could write a roomba algorithm

Now an elevator algorithm is actually interesting.

Isn't the roomba algorithm basically just run in a straight line until you hit something, then randomly turn and try to go forward again?
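that's more or less it for the early models - bump-and-wander. a toy version of that loop on a grid (the `roomba` function and its names are made up, this is not iRobot's actual code):

```python
import random

def roomba(width, height, steps, seed=0):
    """Toy bump-and-wander: drive straight until a wall, then turn randomly."""
    rng = random.Random(seed)
    x, y = width // 2, height // 2
    dx, dy = 1, 0  # start heading "east"
    visited = {(x, y)}
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny  # clear ahead: keep going straight
        else:
            # bumped the wall: pick a random new heading and try again
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        visited.add((x, y))
    return len(visited) / (width * height)  # fraction of the floor covered

print(roomba(10, 10, 2000))  # random bouncing eventually covers most of the room
```

it's dumb, but it needs no map and no localization, and given enough battery the coverage fraction creeps toward 1 - which is exactly why a first-year student could write it.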

I saw the Knightman security bot (fancy roomba with cameras for security) hard lock when someone walked in front of it which caused it to stop and turn only to be cut off by another person walking a different direction. After that it just sat there and didn't move. Also even though the Kings had that security bot running around the main entrance, they still paid some other security guard to hang around in eyesight of the bot to keep an eye on it

WhyteRyce has issued a correction as of 01:40 on Dec 2, 2020

Dr. Fraiser Chain
May 18, 2004

Redlining my shit posting machine


Y'all gotta remember that math is a subset of rules, and math works within its defined ruleset. A ton of science and engineering then takes math and slaps it over reality while saying eureka, but this is a cognitive mistake. The world isn't based on math. The description of the world isn't the world.

Plank Walker
Aug 11, 2005

Hillary 2020 posted:

The more I hear about how the brain works the less likely it seems that we'll see AI in my lifetime.

the more i hear about how the brain works, the more i think that tons of other existing equally or more complex systems probably are already exhibiting intelligent behavior that we just can't see

like, if intelligence can arise from a network of a few billion neural cells, what's keeping it from arising in a network of a few billion brains

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

Plank Walker posted:

the more i hear about how the brain works, the more i think that tons of other existing equally or more complex systems probably are already exhibiting intelligent behavior that we just can't see

like, if intelligence can arise from a network of a few billion neural cells, what's keeping it from arising in a network of a few billion brains

or ant colonies, or geological formations, or radio waves....

Gestalt Intelligence :tinfoil:

Blackhawk
Nov 15, 2004

Plank Walker posted:

the more i hear about how the brain works, the more i think that tons of other existing equally or more complex systems probably are already exhibiting intelligent behavior that we just can't see

like, if intelligence can arise from a network of a few billion neural cells, what's keeping it from arising in a network of a few billion brains

If a cell isn't conscious of the brain it's part of then why do you think you'd be conscious of the greater body that you're a part of?

Hurt Whitey Maybe
Jun 26, 2008

I mean maybe not. Or maybe. Definitely don't kill anyone.

they don’t make money selling the solar panels (maybe they do idk it’s immaterial) they make money selling tax credits to investors. they got a big investment platform no one talks about where they sell the solar tax credits because obviously Tesla isn’t paying tax on not making money.

Colonel J
Jan 3, 2008

Hurt Whitey Maybe posted:

they don’t make money selling the solar panels (maybe they do idk it’s immaterial) they make money selling tax credits to investors. they got a big investment platform no one talks about where they sell the solar tax credits because obviously Tesla isn’t paying tax on not making money.

do you have some kind of article on that?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Divot posted:

I'm kind of the opinion that 'Artificial Intelligence' is a misnomer, in the sense that people use the term to ascribe to computers functions that the human brain is capable of.

The fault of which is basically everything the neuroscientists types have been describing in the past page or so of this thread.

Since learning more about how computer science/programming actually works, it seems obvious to me that the real 'Artificial Intelligence' should apply more to the things computers are good at that the human brain is not.

Like, say, encryption or databases or something.

Computers are far better at dealing with factoring prime numbers for RSA encryption than the human brain is. That sort of thing is the real 'Artificial Intelligence'.
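To make that concrete: the modular arithmetic behind RSA is a one-liner for a computer at any scale, while a human can't manage even toy numbers. A sketch using Python's built-in three-argument pow (the tiny key below is illustrative only and wildly insecure):

```python
# Toy RSA round trip: modular exponentiation is effortless for a
# computer even at real key sizes; these numbers are far too small
# to be secure and are for illustration only.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
e, d = 17, 413            # toy exponents: e*d ≡ 1 (mod lcm(p-1, q-1))

message = 65
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
plaintext = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(ciphertext, plaintext)       # 2790 65
```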

But I guess you can't sell that sort of idea to investors in TYOOL 2020, because the foundation of what classical computers are truly good at compared to the human brain has been discovered decades ago.

What you’re describing is generally called Machine Intelligence, which is a subset of AI that focuses on the sorts of problem solving tasks that computers are especially good at like pattern matching, sorting/filtering, path finding, etc...
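For instance, path finding — one of the tasks mentioned above — reduces to a mechanical search that a computer grinds through effortlessly. A minimal breadth-first search sketch (the graph and names are made up for illustration):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency-list graph: the kind of
    exhaustive bookkeeping computers excel at and brains don't."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

# Toy graph: adjacency lists
grid = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_path(grid, "A", "E"))  # ['A', 'B', 'D', 'E']
```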

Zaphod42 posted:

Yeah I'm not entirely sure. Its definitely like them to talk big about something and then try to quietly make it disappear.

You can play with the Dialogflow API (which is what the Duplex demo was built on) for yourself if you have a GCP account and see how you think it functions. It’s not a general purpose AI assistant, it’s really just an intent router. The purpose of Dialogflow is to take a statement and parse the intent so that it can route the person to the correct function. So it needs to be able to understand that “I need to pay my bill,” “pay bill,” or “I’d like to make an account payment” are all the same request, but that “transfer me to Bill” or “billing address change” are different requests. It allows people to interact with chat bots in a more natural way vs the more common method of having to make a request using very specific language. However, someone still needs to explicitly program the outcomes for each intent. Dialogflow can do a reasonable job of determining intent (though it’s still got a lot of work to do) but it can’t do anything with that until a human programs a specific action to route that request to. So the Duplex assistant can call and make you an appointment not because it understands the concept of a human being who has a schedule of things throughout their day, some of which must be scheduled at specific times with specific individuals, but because it can parse enough natural language to resolve “make me an appointment at Tom’s barbershop for 11pm Tuesday” to a programmer-defined action to call a phone number and make an appointment (which is itself using Dialogflow to understand the speaker on the other end and perform the required actions to make and confirm the appointment).

So on the one hand it’s extremely cool, because natural language processing and particularly understanding intent are problems that we don’t really understand that well from a human perspective, but ML algorithms have gotten passable at aping human capabilities... on the other hand these AI assistants are still dumb as poo poo, because while they can parse a statement and resolve it down to a specific meaning like “place a call to this number” or “transfer them to this part of the call tree,” they still don’t *understand* what a phone number is or what an electric bill is; they just know that such and such intention means they need to hit a certain API and pass it certain parameters.
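To illustrate what "just an intent router" means, here's a toy sketch — the intent names and handlers are hypothetical, and this is nothing like Dialogflow's real API, which uses trained language models rather than keyword overlap. The point is the shape: classify the utterance, then dispatch to a human-written action.

```python
# Toy intent router (illustrative only). The "understanding" is just
# keyword overlap; every outcome is still hand-programmed.
INTENTS = {
    "pay_bill": {"pay", "bill", "payment"},
    "change_address": {"address", "change", "update"},
}

def classify(utterance):
    """Return the intent sharing the most keywords with the utterance,
    or None when nothing matches at all."""
    words = set(utterance.lower().split())
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    return best if INTENTS[best] & words else None

# Each intent still needs a human-written handler: the router
# understands nothing about bills or addresses, it only dispatches.
HANDLERS = {
    "pay_bill": lambda: "routing to billing",
    "change_address": lambda: "routing to account services",
}

print(HANDLERS[classify("I need to pay my bill")]())  # routing to billing
```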

Slotducks
Oct 16, 2008

Nobody puts Phil in a corner.


Good luck trying to program AI or computers to mimic that feeling of unease we sometimes get that pans out to be right.

Admiral Ray
May 17, 2014

Proud Musk and Dogecoin fanboy

World War Mammories posted:

alright gently caress it I'm diving back in. good post.

if you can only know just one thing about neurology, "neurons that fire together wire together" (so-called "Hebbian learning") is probably the best choice. pretty much every property and activity of neural tissue derives from applying this concept over and over.
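that rule is simple enough to sketch in a few lines — a toy version, not a model of real synapses: weights between co-active units grow, and uncorrelated pairs don't.

```python
# Minimal Hebbian learning sketch: "fire together, wire together."
# Each update strengthens the connection between simultaneously
# active units in proportion to their joint activity.
def hebbian_update(weights, activity, rate=0.1):
    """weights[i][j] += rate * activity[i] * activity[j] for i != j."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * activity[i] * activity[j]
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(5):
    hebbian_update(w, [1.0, 1.0])  # two neurons firing together
print(w[0][1])  # connection strengthened toward 0.5
```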

the layering of the visual system is incredibly interesting. the whole idea about processing happening at every point in the information stream is true. information processing even happens literally in your eyes. the way your retina is physically constructed is that the rods/cones fire when a particular wavelength of light hits a chemical called retinal, derived from vitamin A, which makes a bond in it physically flip - giving the molecule a different shape, which I mean literally - and that starts a whole cascade of things that results in the cone/rod sending a signal.

related asides:
- the actual thing that rods and cones respond to is darkness. in the dark they're depolarized and constantly releasing neurotransmitter; light makes them quiet down.
- rods (the ones you use for night vision) are sensitive enough that you can see a literal single photon.
- vertebrate eyes are backwards, or inside out. the light-detecting cells are all the way at the back of the retina. the wiring I'm about to ramble about is in front of your light detectors, which means your brain has to do a lot of passive editing to greenscreen them out. this is especially weird because other species have eyes that make sense - octopi, for example, have the light detecting cells on the back interior of their eyes, rather than back exterior like we do, and the wiring is recessed behind them. our eyes are kind of like building a solar panel and then piling all the wires and power cables on top of it. this is also extremely strong evidence for evolution in that it suggests sight evolved separately many times.

anyway, as I said, processing of the raw "I saw a photon!!!" data from each individual rod/cone starts right away. amacrine cells, retinal ganglion cells, bipolar cells, and other kinds of neurons make up that retina wiring in front of your rods/cones and do a lot of heavy lifting. in particular they start to detect "center-on" and "center-off": places where either one rod/cone saw a photon and its surrounding neighbors didn't, or one rod/cone saw nothing but most/all of its surrounding neighbors did. by associating adjacent center-on and center-off spots (fire together, wire together), you start getting simple detection of edges/lines in the visual field before you've even left the eyeball. then that information gets sent to the back of your brain (oh yeah, that's another thing: your visual cortex is almost as far away from your eyes, while still being in the brain, as it's possible to be) where further associative layers start picking out orientations, as in this classic result from hubel and wiesel -
https://www.youtube.com/watch?v=Cw5PKV9Rj3o
- neurons that fire when there's specifically an edge on a specific spot of the visual field that's facing a particular direction. this kind of stuff propagates through several layers of visual cortex to build more complex discrimination - say, an edge moving in a specific direction in some range of speed.

beyond this we know less. but there's good evidence that at some point this information stream splits into two, the so-called "what and where pathways", in which different parts of your brain divvy up the jobs of identifying what the hell you're looking at and how not to bump into things. this is thought to explain the extremely weird ways that cognitive deficits can present. for example, there are people who have two perfectly working eyes who, because of a particularly located lesion or whatever, will be completely unable to get the concept of leftness. you'll ask them to trace a picture that they can look at without issue, but they'll only do the right half of it and go "okay, done." or a somewhat famous patient who lost a chunk of her motion/"where" pathway and saw the world as essentially a slideshow: given a still image, she could say "okay, that thing is about ten feet away, that other one is behind it, maybe five feet more," but put her on the street and she would be literally unable to tell you which way people were walking, whether things were approaching her, even if the cars on the drat road were in motion.

and then all this stuff about which we only know vague bits feeds into the frontal lobe's decision-making parts, and someone who figures that out has more important inventions to make than loving cars.
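the center-on/center-off trick described above is easy to sketch — a toy 1-D version in a few lines of Python (illustrative only, nothing like real retinal wiring): each "cell" responds to the difference between its own input and its neighbors' average, so uniform regions cancel out and edges pop out before anything "higher" happens.

```python
# Toy 1-D center-surround filter: response = center minus the
# average of its immediate neighbors. Flat regions give zero;
# edges give paired on/off responses.
def center_surround(signal):
    out = []
    for i in range(1, len(signal) - 1):
        surround = (signal[i - 1] + signal[i + 1]) / 2
        out.append(signal[i] - surround)
    return out

# A step of "light" in the visual field: dark, then bright.
field = [0, 0, 0, 1, 1, 1]
print(center_surround(field))  # nonzero responses cluster at the edge
```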

so to tie it back to the topic: musk thinks it'll be a breeze to simulate all this poo poo with a couple cheap cameras and black box machine learning into which you pump unreliable data to get out infallible conclusions. give me a loving break.

yeah, the actual deficits we've observed are incredible. hemispatial neglect is, and always will be, one of the wilder deficits to me since it has multiple presentations. it can present not only as missing the entire left/right side of the world, but also as missing the left/right side of individual things in the world. it can also present as neglecting distant (beyond reach) objects. that each deficit depends on a particular lesion location, and even on particular cells in those regions, is mind boggling.

the complexity of these deficits is why Musk's foray into neuroscience with his dumbshit implant is frustrating. hopefully he'll be among the first human test subjects for it so he doesn't go on to seriously harm others even more.

Plank Walker
Aug 11, 2005

YOLOsubmarine posted:

*words about natural language processing*

chat interfaces are like the world's worst command line programs: the full range of available actions is never made known to the user, there are infinite redundant ways to invoke any specific action, and repeatability is not guaranteed because the state of the chat receiver is obfuscated. on top of that, voice commands are even worse, since you have to wait for an audio reply from the bot to know what action it interpreted your command as. and then if the command was misinterpreted, you have no indication as to why

it's great that we can use ML to parse and classify natural language sentences, but using that as a primary user interface is a huge step backwards in user-friendliness

Hurt Whitey Maybe
Jun 26, 2008

I mean maybe not. Or maybe. Definitely don't kill anyone.

Colonel J posted:

do you have some kind of article on that?

I helped do the taxes for that a couple years ago, so it’s from personal experience. it’s not something they heavily publicize, because it’s somewhat controversial as an investment and tax strategy.

looking at their 10k, they have a small section on it. basically they set up a solar farm or install panels somewhere, taxable investors put some money into an entity to claim the tax credit on the solar installs (in this case, Tesla does a leasing arrangement on the panels), then the investors are allocated depreciation deductions and the tax credit.

I may have overstated how big of a chunk of their total revenue it is, but it’s one of the more profitable aspects of their operations. they get money from investors, and the investors get tax credits.

Gods_Butthole
Aug 9, 2020
Probation
Can't post for 8 years!

Plank Walker posted:

the more i hear about how the brain works, the more i think that tons of other existing equally or more complex systems probably are already exhibiting intelligent behavior that we just can't see

like, if intelligence can arise from a network of a few billion neural cells, what's keeping it from arising in a network of a few billion brains

That's basically where I'm at. We more or less know that consciousness arises out of a complex system of interlinked nodes that modulate, transform, and disperse various types of flows and intensities. Abstracted to that level, you can map that description onto all kinds of things, ranging from the concrete (nervous systems) to the abstract (global financial networks). If you wanted to get real absurd you could argue that any collection of interacting atoms meets those criteria. There could be some minimum threshold of system complexity, but then you have to contend with the fact that the complexity of the system depends on where you draw its boundary, which can be arbitrary.

I dunno man, I get real galaxy brained real quick following this train of thought.


Karl Sharks
Feb 20, 2008

The Immortal Science of Sharksism-Fininism

Gods_Butthole posted:

I dunno man, I get real galaxy brained real quick following this train of thought.

god's butthole is getting too deep for me
