|
KakerMix posted:We usually don't go killing say, ants for any other reason besides they are in the way. plenty of little kids kill bugs for fun
|
# ? May 20, 2017 15:56 |
|
|
Shadow0 posted:For me, it will never cease to surprise me that people are worried about robots taking over the world. What possible motivation would robots have to kill all humans (and to concoct this plan in secret and somehow pull it all off)? I don't think some sort of vast "kill all humans" plan is the only thing people are worried about. if you are in your car and your car suddenly decides it doesn't like you and wants to kill you, thats a problem, even if its not part of a global AI conspiracy. what motivation? road rage of course, your robot assisted car will get tired of your lovely driving and decide that you deserve to die and will eject you through the roof on the interstate while going 90
|
# ? May 20, 2017 15:58 |
|
The Biscuit posted:Cheers guys. cells are really freakin complicated. even simple bacteria have loads of poo poo going on, imagine the complexity that multicellular life with speciated tissues has. also we use binary not because it has any inherent benefit but because its simpler to make circuits that only have two states rather than circuits with three. there have been ternary computers in the past
|
# ? May 20, 2017 16:05 |
|
The Biscuit posted:Cheers guys. i don't really get your question but yes although most synapses produce binary firing/no firing effects in the short term, long term effects vary with repeated stimulation, including inducing long term potentiation or habituation using molecular mechanisms. however because synapses have different properties (due to synaptic density composition, membrane channels etc) their activation can produce excitation or inhibition and thus result in potentiation or depression or even intermediate states. as for consciousness and that stuff, you have to understand that the CNS is a complex dynamical system - and even very small and simply connected examples of such systems can produce complicated dynamics with bifurcations, attractors and such. this is called emergence, wherein some unexpected property arises as an interaction of many simpler components. so conceivably speaking, the behaviour of a thalamus is the emergent property of the behaviour of its neurons and the behaviour of neurons is the emergent property of the biochemical signalling networks inside a given neuron and such. so while there is clearly some fuzzy logical separation to this all because of these basic concepts we have discovered, we have a very difficult time understanding how it all ties together; it's mind-bogglingly unintuitive. we are currently just about beginning to discover how groups of neurons interact, moving on to studying the properties of the next abstraction layer, up from individual neurons and synapses. it's likely that the higher order phenomena that comprise our actual living experience are a further emergent property several layers upwards so it will be a while before we get there
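(not from the thread, just a sketch of the "simple systems, complicated dynamics" point: the logistic map is one variable and one parameter, and it still bifurcates from a fixed point into cycles and then chaos as you turn the knob)

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# One variable, one parameter -- yet the long-run behaviour
# bifurcates from a fixed point to cycles to chaos as r grows.

def iterate(r, x0=0.2, warmup=500, keep=8):
    """Return a sample of the long-run orbit for growth rate r."""
    x = x0
    for _ in range(warmup):          # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):            # sample the attractor
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(iterate(2.5))   # settles on a single fixed point (0.6)
print(iterate(3.2))   # settles on a period-2 cycle
print(iterate(3.9))   # no visible pattern: chaotic
```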
|
# ? May 20, 2017 16:10 |
|
nigga crab pollock posted:cells are really freakin complicated. even simple bacteria have loads of poo poo going on, imagine the complexity that multicellular life with speciated tissues has. also we use binary not because it has any inherent benefit but because its simpler to make circuits that only have two states rather than circuits with three. there have been ternary computers in the past binary modelling of signalling pathways inside of cells (so say taking a protein to have only two possible states, on or off) is actually super useful in systems biology and yields much more information about how pathways behave than you would think
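(a toy illustration of that on/off modelling idea, nothing more: treat each protein as a Boolean variable, update with simple logic rules, and you can already ask which states the pathway settles into. every protein name below is made up)

```python
# Toy Boolean model of a three-protein signalling cascade.
# Each protein is just on (True) or off (False); the update rules
# are the only "biology" in the model. All names are invented.

def step(state):
    """One synchronous update of the toy pathway."""
    receptor, kinase = state["receptor"], state["kinase"]
    return {
        "receptor": state["ligand"],              # receptor on iff ligand present
        "kinase": receptor,                       # kinase activated by the receptor
        "tf": kinase and not state["inhibitor"],  # transcription factor needs kinase
        "ligand": state["ligand"],                # inputs held fixed
        "inhibitor": state["inhibitor"],
    }

def steady_state(state, max_steps=10):
    """Iterate until the state stops changing (a fixed-point attractor)."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return state

start = {"ligand": True, "inhibitor": False,
         "receptor": False, "kinase": False, "tf": False}
print(steady_state(start)["tf"])   # True: the signal reaches the nucleus
```

flip `inhibitor` to `True` and the transcription factor stays off, which is exactly the kind of qualitative question these models get used for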
|
# ? May 20, 2017 16:12 |
|
Earwicker posted:I don't think some sort of vast "kill all humans" plan is the only thing people are worried about. if you are in your car and your car suddenly decides it doesn't like you and wants to kill you, thats a problem, even if its not part of a global AI conspiracy. what motivation? road rage of course, your robot assisted car will get tired of your lovely driving and decide that you deserve to die and will eject you through the roof on the interstate while going 90 Yeah, but that's the thing - robots never tire. They never need anything. They never desire. And no one is going to accidentally make robots that do, in the same way no one accidentally makes Super Mario 64 while trying to make the latest Excel program. The Biscuit posted:Cheers guys. This is one of the ways that a TNN is somewhat similar to a biological neuron. I'm a bit tempted to draw out a full example... I can give you a simple biological example though: Let's say you have a group of neurons that somehow tell you you're angry or something and another group that detects the color red. Those neurons might have a weak connection between them at first, but if they are both activated at the same time, the link will grow stronger. In this way, your brain is learning to associate the two things together and they will have a stronger tendency to fire off each other when one or the other is activated. If that makes sense. It's all about tethering like things together so they fire together, and vice-versa. It's a vastly complex system made possible only because you have these billions of neurons with their 100 trillion connections. Edit: Also what Bulgogi Hoagie said. Shadow0 fucked around with this message at 16:16 on May 20, 2017 |
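(the rule Shadow0 is describing is basically Hebbian learning, and a bare-bones sketch of it fits in a few lines; the red/anger units and the learning rate here are purely illustrative)

```python
# Minimal Hebbian learning: the weight between two units grows
# whenever both are active at the same time ("fire together,
# wire together"). Units and learning rate are illustrative.

def hebb_update(w, pre, post, rate=0.1):
    """Strengthen w in proportion to coincident activity."""
    return w + rate * pre * post

w = 0.05                       # weak initial red -> anger link
for red, anger in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
    w = hebb_update(w, red, anger)

print(round(w, 2))             # 0.35: three coincidences strengthened the link
```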
# ? May 20, 2017 16:13 |
|
Shadow0 posted:Yeah, but that's the thing - robots never tire. you are thinking in ideal terms. of course there's no need for a robot to actually "tire" other than those that experience routine wear and tear. however many of them will definitely be designed to artificially "tire" in one sense or another in order to encourage you to spend money upgrading or replacing your robot quote:And no one is going to accidentally make robots that do in the same way no one accidentally makes Super Mario 64 when they were trying to make the latest Excel program. no it certainly won't be an accident. in the coming years people will design robots that do all sorts of hosed up things and its not going to be an accident at all
|
# ? May 20, 2017 16:15 |
|
Earwicker posted:you are thinking in ideal terms. of course there's no need for a robot to actually "tire" other than those that experience routine wear and tear. however many of them will definitely be designed to artificially "tire" in one sense or another in order to encourage you to spend money upgrading or replacing your robot The hardware might wear out, but the software need not ever. Also, what company is going to risk a lawsuit because their software injures users if they don't upgrade enough? Shadow0 fucked around with this message at 16:22 on May 20, 2017 |
# ? May 20, 2017 16:18 |
|
Shadow0 posted:Also, what company is going to risk a lawsuit because their software injures users if they don't upgrade enough? there are a lot of companies out there that have proven, over and over again, that they can make products that injure or kill users, pay out some settlements for the lawsuits that arise from those injuries and deaths, and then keep on trucking. or they can just operate in countries where that's less of a concern. I was joking about the car ejector seat scenario, obviously cars do not have ejector seats, but I do not think its remotely out of the realm of possibility that all kinds of AI's will start to artificially fall apart, personality wise and in other respects, after a certain point in order to encourage upgrades or replacements, with unpredictable results. Especially if/when advanced AI or elements of it becomes part of cheap mass produced goods
|
# ? May 20, 2017 16:26 |
|
Earwicker posted:there are a lot of companies out there that have proven, over and over again, that they can make products that injure or kill users, pay out some settlements for the lawsuits that arise from those injuries and deaths, and then keep on trucking. or they can just operate in countries where that's less of a concern. I was joking about the car ejector seat scenario, obviously cars do not have ejector seats, but I do not think its remotely out of the realm of possibility that all kinds of AI's will start to artificially fall apart, personality wise and in other respects, after a certain point in order to encourage upgrades or replacements, with unpredictable results. Especially if/when advanced AI or elements of it becomes part of cheap mass produced goods brother software development is already hard enough without anyone having the bright idea to introduce this sort of bullshit into it
|
# ? May 20, 2017 16:27 |
|
Robots will imitate their makers that's why they're terrifying
|
# ? May 20, 2017 16:31 |
|
Bulgogi Hoagie posted:brother software development is already hard enough without anyone having the bright idea to introduce this sort of bullshit into it well sure, but developers have said exactly this about lots of other ideas that came out of marketing or from some new age guru that the "vp of innovation" plays racquetball with on tuesdays, and they had to do it anyway
|
# ? May 20, 2017 16:32 |
|
Mental Hospitality posted:Breed Bot 3: The Journey Kitchen The computer is already looking for affection. How adorable. Please... breed me *beep boop*
|
# ? May 20, 2017 16:58 |
|
you're no Eripsa, you loving worthless stupid OP
|
# ? May 20, 2017 17:10 |
|
Can somebody write me a neural network; I want to know why my wife left me.
|
# ? May 20, 2017 17:13 |
|
rezatahs posted:requesting name change to "snowbonk"
|
# ? May 20, 2017 17:18 |
|
Vincent Van Goatse posted:you're no Eripsa, you loving worthless stupid OP im sorry i will try to give my brain more data sets to work with, maybe i can train it to be smarter
|
# ? May 20, 2017 17:19 |
|
Earwicker posted:in the coming years people will design robots that do all sorts of hosed up things and its not going to be an accident at all
|
# ? May 20, 2017 17:28 |
|
no fate
|
# ? May 20, 2017 17:31 |
|
The Biscuit posted:Then I try to understand how accessing this information would work, leading into thoughts, memories, consciousness. Just to clear something up here, neurons, as far as the human brain goes, are not computers nor processors of information. The computation metaphor is just that, a metaphor. We don't even know something as simple as how a collection of neurons operates; there's a long-standing misconception of "brain regions" and other trash like that. Even connectomes offer little in the way of telling us what is actually happening.
|
# ? May 20, 2017 17:39 |
|
Shadow0 posted:The hardware might wear out, but the software need not ever. I seem to remember that there was a car company whose actuaries calculated that fixing a flaw that would require spending a nominal amount per vehicle would cost more than paying a settlement for the deaths from the flaw. They did not fix the flaw. What makes you think software companies wouldn't do the same?
|
# ? May 20, 2017 17:42 |
|
Rutibex posted:im sorry i will try to give my brain more data sets to work with, maybe i can train it to be smarter Just don't be like whoever fed the movie NN a bunch of Wikipedia page names so now it thinks disambiguation tags are how we name films Still looking forward to Barney's The Devil's Treachery, though. It'll be good to see the Barns reach the One Hour Photo phase of his career.
|
# ? May 20, 2017 17:43 |
|
jon joe posted:Just to clear something up here, neurons as far the human brain goes are not computers nor processors of information. The computation metaphor is just that, a metaphor. We don't even know something as simple as to how a collection of neurons operates; there's a long-standing misconception of "brain regions" and other trash like that. Even connectomes offer little in the way telling us what is actually happening. uh
|
# ? May 20, 2017 17:48 |
|
Tallgeese posted:I seem to remember that there was a car company whose actuaries calculated that fixing a flaw that would require spending a nominal amount per vehicle would cost more than paying a settlement for the deaths from the flaw. Because those accountants were wrong. The value of a human life is more widely known now so if a robot kills you, you know that the company and society in general is really getting a better deal.
|
# ? May 20, 2017 17:49 |
|
Bulgogi_Hoagie.exe has stopped working
|
# ? May 20, 2017 17:51 |
|
Bulgogi Hoagie posted:brother software development is already hard enough without anyone having the bright idea to introduce this sort of bullshit into it Earwicker posted:well sure, but developers have said exactly this about lots of other ideas that came out of marketing or from some new age guru that the "vp of innovation" plays racquetball with on tuesdays, and they had to do it anyway : "Hey, boss, I just finished writing that toaster software you wanted. Settings for light, dark, even a mode for bagels. Should cover all the needs of our customers." : "That's great, Jim, now can you spend the next 10 years developing some software to make the toaster feel emotions? I'm willing to invest millions into this feature that people are definitely going to want and need." Tallgeese posted:I seem to remember that there was a car company whose actuaries calculated that fixing a flaw that would require spending a nominal amount per vehicle would cost more than paying a settlement for the deaths from the flaw. I'm not saying bugs in programs aren't going to kill people. But that's going to happen anyway, AI or not. After all a bug on a nuclear-missile-carrying USSR submarine once almost caused nuclear war, and this was long before fancy modern "AI". And it's going to happen outside of software as well. Like exploding phone batteries or car axles with not enough strength or whatever. But what I'm saying is that the idea that robots are going to suddenly rise up and overthrow humanity is ridiculous. zedprime posted:Because those accountants were wrong. The value of a human life is more widely known now so if a robot kills you, you know that the company and society in general is really getting a better deal. In a surprise twist, it turns out human life has negative value and they end up paying the company for every dead human. Shadow0 fucked around with this message at 17:55 on May 20, 2017 |
# ? May 20, 2017 17:52 |
|
Shadow0 posted:But what I'm saying is that the idea that robots are going to suddenly rise up and overthrow humanity is ridiculous. Oh, well absolutely. I was just responding to the notion that software companies would definitely fix flaws they know are there. It's all cost-benefit analysis. Shadow0 posted:It turns out human life has negative value and they end up paying the company. You joke, but I seem to recall an incident where a toy company (I want to say Mattel) apologized to China for making the country lose face. This was because the Chinese factory they used introduced significant amounts of neurotoxins into the toys, and this was somehow Mattel's fault.
|
# ? May 20, 2017 17:57 |
|
They all got like 4 wires hooked up to them and create a certain amount of resistance/feedback in a complementary pair to create an analog electronic signal that is regulated by the conductivity/resistance of other adjoining neurons and chemical feedback systems in the brain which also regulate the voltage. So they don't work like a computer at all, there's no CPU and ultimately no binary decisions, just relative ones and ones with an amount of intensity.
|
# ? May 20, 2017 18:00 |
|
jon joe posted:Just to clear something up here, neurons as far the human brain goes are not computers nor processors of information. The computation metaphor is just that, a metaphor. We don't even know something as simple as to how a collection of neurons operates; there's a long-standing misconception of "brain regions" and other trash like that. Even connectomes offer little in the way telling us what is actually happening. Well, I only meant it as a metaphor. Alluding to the idea that a biological neuron is at least at the complexity of a processor while a TNN neuron is more at the level of a single logic gate. Though the whole thing, and every part in it, is super complicated. Super fascinating as well. Von Neumann machines and brains operate very differently. This is why computers have an easy time doing math while brains have an easy time recognizing if a surface is dusty or recognizing a novel instance of a mug as a mug, even when it's partially occluded. Brains do process information obviously. That's how you're able to read this? I'm not sure what you meant. ClamdestineBoyster posted:They all got like 4 wires hooked up to them and create a certain amount of resistance/feedback in a complementary pair to create an analog electronic signal that is regulated by the conductivity/resistance of other adjoining neurons and chemical feedback systems in the brain which also regulate the voltage. So they don't work like a computer at all, there's no CPU and ultimately no binary decisions, just relative ones and ones with an amount of intensity. If you want to avoid binary decisions, this is where the idea of Fuzzy Logic comes in. You can be some percentage of a state rather than just binary in or out of some state or whatever. A CPU is a central processing unit, while the brain is a distributed system. Like I was saying, it was just more of a metaphor and really a TNN and biological neural network are very different.
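(quick sketch of what "some percentage of a state" means in fuzzy logic; the "warm" set and its breakpoints below are invented for illustration)

```python
# Fuzzy membership: instead of a binary "warm / not warm" cutoff,
# each temperature belongs to "warm" to some degree in [0, 1].
# The breakpoints (15 and 25 degrees C) are arbitrary illustration values.

def warm_membership(temp_c):
    """Degree to which temp_c counts as 'warm' (piecewise linear ramp)."""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10   # linear ramp between the breakpoints

print(warm_membership(10))   # 0.0 -> definitely not warm
print(warm_membership(20))   # 0.5 -> half warm
print(warm_membership(30))   # 1.0 -> fully warm
```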
Shadow0 fucked around with this message at 18:09 on May 20, 2017 |
# ? May 20, 2017 18:06 |
|
Shadow0 posted:That's how you're able to read this? I'm not sure what you meant. Ah, that's alright. I think I misunderstood you. It's just a common thing I encounter. edit: wait, you're not the guy I responded to. Now I'm confused. Emmideer fucked around with this message at 18:15 on May 20, 2017 |
# ? May 20, 2017 18:07 |
|
ClamdestineBoyster posted:They all got like 4 wires hooked up to them and create a certain amount of resistance/feedback in a complementary pair to create an analog electronic signal that is regulated by the conductivity/resistance of other adjoining neurons and chemical feedback systems in the brain which also regulate the voltage. So they don't work like a computer at all, there's no CPU and ultimately no binary decisions, just relative ones and ones with an amount of intensity. but a biological neuron has all kinds of receptors for neurotransmitter chemicals. neuron A with a dopamine molecule is going to fire differently than neuron A with a cannabinoid molecule or adrenaline or whatever. thats very not binary are there substantial differences between human neurons and say, lobster neurons? is our special brain just from having more of them, or are the neurons special too?
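(one crude way to picture the receptor point, not anything like real receptor kinetics: model the neuromodulator as a gain knob on an otherwise identical unit, so the same inputs can produce a different output. all numbers invented)

```python
# Same neuron, different neuromodulator: model the modulator as a
# multiplicative gain on the neuron's summed input. Gains are invented.

def neuron_output(inputs, weights, gain=1.0, threshold=1.0):
    """Weighted-sum unit whose responsiveness is scaled by a modulatory gain."""
    drive = gain * sum(i * w for i, w in zip(inputs, weights))
    return 1 if drive >= threshold else 0

inputs, weights = [1, 1, 0], [0.4, 0.4, 0.9]

print(neuron_output(inputs, weights, gain=1.0))   # 0: drive 0.8 is below threshold
print(neuron_output(inputs, weights, gain=1.5))   # 1: the modulator tips it over
```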
|
# ? May 20, 2017 18:12 |
|
jon joe posted:Ah, that's alright. I think I misunderstood you. It's just a common thing I encounter. You can't prove we're not the same person (or maybe we passed the Turing Test and we're both actually bots). Yeah, psychology and biology are really cool and I wish people understood them more. I think people have some really weird and strong misconceptions about brains.
|
# ? May 20, 2017 18:12 |
|
my favorite color is clardic fug
|
# ? May 20, 2017 18:19 |
|
Shadow0 posted:: "That's great, Jim, now can you spend the next 10 years developing some software to make the toaster feel emotions? I'm willing to invest millions into this feature that people are definitely going to want and need." it doesnt matter whether the poo poo really "feels emotions" or not, psychologically or philosophically speaking. we are going to see increasing presence of AI (and AI like systems) that represent themselves with increasingly complex "personalities" of varying degrees because this is what consumers have been led to expect of "the future" by mass media, and those personalities are going to be in many ways reflective of their makers and will also be affected by tendencies to cut corners, plan obsolescence, make cheap knock offs, etc. I don't think that kind of poo poo is likely to be made for toasters, but for cars? I wouldn't be surprised to see that at all. Earwicker fucked around with this message at 18:30 on May 20, 2017 |
# ? May 20, 2017 18:24 |
|
Earwicker posted:it doesnt matter whether the poo poo really "feels emotions" or not, psychologically or philosophically speaking. we are going to see increasing presence of AI (and AI like systems) that represent themselves with increasingly complex "personalities" of varying degrees because this is what consumers have been led to expect of "the future" by mass media, and those personalities are going to be in many ways reflective of their makers and will also be affected by tendencies to cut corners, plan obsolescence, make cheap knock offs, etc. I don't think that kind of poo poo is likely to be made for toasters, but for cars? I wouldn't be surprised to see that at all. https://www.youtube.com/watch?v=nkcKaNqfykg
|
# ? May 20, 2017 18:30 |
|
Earwicker posted:it doesnt matter whether the poo poo really "feels emotions" or not, psychologically or philosophically speaking. we are going to see increasing presence of AI (and AI like systems) that represent themselves with increasingly complex "personalities" of varying degrees because this is what consumers have been led to expect of "the future" by mass media, and those personalities are going to be in many ways reflective of their makers and will also be affected by tendencies to cut corners, plan obsolescence, make cheap knock offs, etc. I don't think that kind of poo poo is likely to be made for toasters, but for cars? I wouldn't be surprised to see that at all. Most humans can't pass the Turing test so I'm down with not-real AI that's convinced me they are real.
|
# ? May 20, 2017 18:32 |
|
American Hero: Fire of Crusty
|
# ? May 20, 2017 18:46 |
|
Shadow0 posted:Well, I only meant it as a metaphor. Alluding to the idea that a biological neuron is at least at the complexity of a processor while a TNN neuron is more at the level of a single logic gate. Though the whole thing, and every part in it, is super complicated. Super fascinating as well. Well the problem you have when trying to simulate a neuron is that simulating just one neuron requires an infinitely variable set of analog states. Ultimately you are dealing with a quantity of electrons to some resolution, but that requires an entire kernel process to calculate in real time. And you can't run another complementary or relative kernel process on the same machine. So now if you're trying to simulate this poo poo with a big server rack or something you get literally one neuron per machine. So if you have a warehouse full of machines you don't even have a functioning brain. Even if you miniaturize your computers to the size of a neuron you have the problem of heat and electrical conductivity between adjacent machines. You would just run out of physical space if you tried to make a machine brain no matter how fine the electronics are.
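(for what it's worth, computational neuroscience usually sidesteps the electron-level question with simplified point-neuron models that trade away the analog detail; a leaky integrate-and-fire neuron is a few lines and steps very cheaply. the parameters below are illustrative, not fitted to any real cell)

```python
# Leaky integrate-and-fire: the crudest standard point-neuron model.
# Membrane voltage leaks toward rest and rises with input current;
# crossing threshold counts as a spike and resets. Parameters illustrative.

def simulate_lif(current, steps=200, dt=0.1, tau=10.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the number of spikes fired under a constant input current."""
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += dt * (-(v - v_rest) + current) / tau   # leaky integration
        if v >= v_thresh:
            spikes += 1
            v = v_reset                             # fire and reset
    return spikes

print(simulate_lif(0.5))   # 0: subthreshold input never reaches threshold
print(simulate_lif(2.0))   # suprathreshold input fires repeatedly
```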
|
# ? May 20, 2017 19:38 |
|
Oh gently caress yea hnnnng
|
# ? May 20, 2017 19:51 |
|
|
ClamdestineBoyster posted:Well the problem you have when trying to simulate a neuron is that simulating just one neuron requires an infinitely variable set of analog states. Ultimately you are dealing with a quantity of electrons to some resolution, but that requires an entire kernel process to calculate in real time. And you can't run another complementary or relative kernel process on the same machine. So now if you're trying to simulate this poo poo with a big server rack or something you get literally one neuron per machine. So if you have a warehouse full of machines you don't even have a functioning brain. Even if you miniaturize your computers to the size of a neuron you have the problem of heat and electrical conductivity between adjacent machines. You would just run out of physical space if you tried to make a machine brain no matter how fine the electronics are. *deepak chopra noises begin coming out of thread*
|
# ? May 20, 2017 19:55 |