joke_explainer


HighwireAct posted:

a robo-trolley that kills the greater number of people regardless of whether or not the lever was pulled

The trolley problem wasn't something I expected to come into real applications so quickly in my lifetime, but here we are, having to choose how to program machines that will hold lives in their hands and deal with catastrophic events where people will die. Companies whose programmers don't minimize loss of life seem like they could be liable for the damages their machines do, even if the machines hurt fewer people than a human in the same situation might have: the programming and decision making are clearly to blame, not an individual at the switch who could not have been expected to make a rational decision in a short span of time. Sensors sending and receiving signals hundreds of times a second, and decision making that can complete lookups and deep analysis before a synapse can even finish firing, are a reality of the world we live in.

So the topic of a machine having to account for the behavior of other machines is very interesting. What data will they try to gather before they make their decision? Will the relative sociopathy of another machine's programming factor heavily into a robot's decision making? It seems like they run into very difficult race conditions here. If your decision-making computer is certain the other computer will minimize loss of life no matter what, it would assume the other car will veer out of the way even at the cost of its own occupants, so it logically shouldn't veer itself, keeping the greatest number alive. But if the other computer knows the first machine is programmed the same way, it would assume the first will veer. Are they allowed to talk to each other? Roll the dice? If both of them assume the other will veer, they'll just careen into each other. Very strange times.
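As a sketch of that dice-roll idea (everything here is hypothetical, not any real autonomous-vehicle stack): pure "the other guy will veer, so I won't" reasoning deadlocks symmetrically, but if both cars can agree on one shared random draw, say over a vehicle-to-vehicle handshake, exactly one of them gets elected to veer. A minimal Python sketch, assuming a made-up decide() controller and a pre-agreed seed:

code:
import random

def decide(my_id: int, shared_seed: int) -> str:
    """Hypothetical tie-break for two identical life-minimizing cars.

    Reasoning alone deadlocks: "the other will veer, so I hold" is
    true for both cars at once, and they careen into each other.
    One shared random draw breaks the symmetry: exactly one car
    is elected to veer.
    """
    rng = random.Random(shared_seed)   # both cars seed identically
    chosen = rng.choice([0, 1])        # one dice roll, made "together"
    return "veer" if chosen == my_id else "hold"

# Both controllers run the same code with the same agreed seed,
# so their actions are guaranteed to be complementary.
seed = 42  # stand-in for a value negotiated over a V2V link
print(decide(my_id=0, shared_seed=seed))
print(decide(my_id=1, shared_seed=seed))

The catch is the seed: the cars have to communicate, or both observe the same thing, to agree on it, which is exactly the "are they allowed to talk to each other?" question.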


joke_explainer


google THIS posted:

A robot that explains jokes :evilbuddy:

:negative:

joke_explainer


A Predator drone, but instead of dropping Hellfire missiles, it cargo-drops crates of AK-47s, every one tagged 'DO NOT USE FOR TERRORISM'.

joke_explainer


A stealthy robot which picks locks and rolls into people's kitchens in the middle of the night to open their fridges and leave.
