  • Locked thread
cheese
Jan 7, 2004

Shop around for doctors! Always fucking shop for doctors. Doctors are stupid assholes. And they get by because people are cowed by their mystical bullshit quality of being able to maintain a 3.0 GPA at some Guatemalan medical college for 3 semesters. Find one that makes sense.

FADEtoBLACK posted:

So that's it? The US needs to invest in fighter tech for the sake of economic export?
We sure are going to have to sell a lot of these F-35s to Ethiopia to recoup our costs - gotta respect the free market, baby.


fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

Paul MaudDib posted:

The main engagement strategy for an aircraft like an F-22 is "fly towards enemy, at 80 miles out shoot long-range missiles before we show up on their radar" which is not exactly a complicated mission to automate.

Is this actually true? That youtube video of the aircraft designer talking about the F35 says they don't and can't actually do this, and it's just a technological dream. I trust he knows what he's talking about more than I trust a bunch of nerds on D&D. Are you saying he's wrong?

FADEtoBLACK
Jan 26, 2007

MeramJert posted:

Is this actually true? That youtube video of the aircraft designer talking about the F35 says they don't and can't actually do this, and it's just a technological dream. I trust he knows what he's talking about more than I trust a bunch of nerds on D&D. Are you saying he's wrong?

He was talking about the F22, not the F35. There are youtube videos of the F22 being beat in short-range engagements by more agile fighters. It's designed to use superior sensors and weapons at long range and then GTFO.

Which is programmable when you out-range and out-sense everything and can also respond faster than a human being.

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

FADEtoBLACK posted:

He was talking about the F22, not the F35. There are youtube videos of the F22 being beat in short-range engagements by more agile fighters. It's designed to use superior sensors and weapons at long range and then GTFO.

Did you watch the video? That guy was claiming this doesn't happen ever, not just that the F35 isn't capable of it.

FADEtoBLACK
Jan 26, 2007
I'm not sure if I did; was it the guy who was involved with the F16 talking about the 22/35? I do trust him, but you have to remember that the range of the engagement depends on your sensor and weapons package, not on what plane you're flying. There are articles talking about how they have flight simulations of weapons systems that are still being worked on. Look, I agree with you on doubting they can do this right now, but this is more of a 'feature creep' thing where every future aircraft will be steadily increasing engagement range and weapon capability. The F22 is an optimal but horribly expensive weapons platform, and the F35 exists because the dream project for Lockheed is a fighter built in every state whose development is always ongoing and never ends, with Lockheed holding sole contract rights on maintenance and production. This is what happens when you give the military-industrial complex 'something to do'

FADEtoBLACK fucked around with this message at 08:07 on Jul 6, 2014

Spaceman Future!
Feb 9, 2007

MeramJert posted:

Did you watch the video? That guy was claiming this doesn't happen ever, not just that the F35 isn't capable of it.

That really depends on the engagement and location. The issue has never been "can we blow it up from 100 miles away"; it has always been "are we totally sure that's a MiG and not one of our Blackhawks". As automation of mission logistics improves, the chances that you're misidentifying a friendly at that distance are significantly lower, and with the automation of warfare in most cases now, if you are incorrect you will more than likely end up accidentally blowing up one of your own Predators rather than a manned plane.

Of course, the only foolproof way to fix that long-range conundrum is to put all these manned fighters in mothballs and use strictly unmanned tech, which is the smart thing to do anyway, since any unmanned tech is going to be controlled from a central source and will be able to identify each other's positions easily because all the commands and controls can be centrally audited in real time. Plus, air superiority is inherent to the Predator drone simply by default of cost. For the cost of a single F-35 you can field roughly 40 Predators; if all 40 engage a single air target, you win even if a few get shot down by the "superior" platform.
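As an aside, that many-drones-per-fighter figure is easy to sanity-check. The unit costs below are rough ballpark assumptions for illustration, not sourced numbers; the exact ratio depends entirely on which cost figures you plug in.

```python
# Rough check of the fielding-ratio claim above.
# Both unit costs are ballpark assumptions for illustration only.
F35_UNIT_COST = 100_000_000       # assumed ~$100M per F-35
PREDATOR_UNIT_COST = 4_000_000    # assumed ~$4M per Predator-class drone

drones_per_fighter = F35_UNIT_COST // PREDATOR_UNIT_COST
print(drones_per_fighter)  # 25 at these assumed prices
```

Whether it comes out to 25:1 or 40:1 depends on the assumed prices; the order of magnitude is the point.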

Why the hell are we spending money on this horseshit again? Can't we just give money to workers in manufacturing districts instead? It seems it would be cheaper just to cover their salaries, and we would save money on building materials, facilities and all that other poo poo. Everybody wins.

Spaceman Future! fucked around with this message at 17:56 on Jul 6, 2014

karthun
Nov 16, 2006

I forgot to post my food for USPOL Thanksgiving but that's okay too!

He is probably talking about Pierre Sprey. In response, the tennis court is 104-0.

Pimpmust
Oct 1, 2008

Paul MaudDib posted:

The F-35 is just a piece of designed-by-commitee poo poo. So far as I can see it's the expensive modern-day equivalent of the F-4. It's designed to fit everyone's needs and now it sucks at most of its roles. Nevertheless, we lack any real competition in the air-power world, so it'll probably do just fine. It's not going to realistically come up against cutting-edge Russian interceptors so far as I can see.

Actually it's the modern-day equivalent of the F-105 Thunderchief :allears:



quote:

F-105
General characteristics

Crew: 1 (2 for F-105C/E/F/G variants)
Payload: 14,000 lb (6,700 kg) of weapons
Length: 64 ft 4.75 in (19.63 m)
Wingspan: 34 ft 11.25 in (10.65 m)
Height: 19 ft 8 in (5.99 m)
Wing area: 385 ft² (35.76 m²)
Empty weight: 27,500 lb (12,470 kg)
Loaded weight: 35,637 lb (16,165 kg)
Max. takeoff weight: 52,546 lb (23,834 kg)

Performance

Maximum speed: Mach 2.08 (1,372 mph, 2,208 km/h) at 36,000 ft (11,000 m)
Combat radius: 780 mi (680 nmi, 1,250 km)
Ferry range: 2,210 mi (1,920 nmi, 3,550 km)
Service ceiling: 48,500 ft (14,800 m)
Rate of climb: 38,500 ft/min (195 m/s)
Wing loading: 93 lb/ft² (452 kg/m²)
Thrust/weight: 0.74
Lift-to-drag ratio: 10.4
Time to altitude: 1.7 min to 35,000 ft (11,000 m)

quote:

F-35
General characteristics

Crew: 1
Payload: 15,000 lb (6,800 kg) of weapons
Length: 51.4 ft (15.67 m)
Wingspan: 35 ft (10.7 m)
Height: 14.2 ft (4.33 m)
Wing area: 460 ft² (42.7 m²)
Empty weight: 29,300 lb (13,300 kg)
Loaded weight: 49,540 lb (22,470 kg)
Max. takeoff weight: 70,000 lb (31,800 kg)

Performance

Maximum speed: Mach 1.6+ (1,200 mph, 1,930 km/h) (tested to Mach 1.61)
Range: 1,200 nmi (2,220 km) on internal fuel
Combat radius: 584 nmi (1,080 km) on internal fuel
Service ceiling: 60,000 ft (18,288 m)
Rate of climb: classified (not publicly available)
Wing loading: 107.7 lb/ft² (526 kg/m²; 745 kg/m² max loaded)
Thrust/weight:
With full fuel: 0.87
With 50% fuel: 1.07

Okay, I lied: it's heavier, slower, and has higher wing-loading.

Pimpmust fucked around with this message at 09:18 on Jul 6, 2014

Cat Mattress
Jul 14, 2012

by Cyrano4747

hobbesmaster posted:

Eritrean-Ethiopian war had dogfights between jets in 2002.

But Africa doesn't count I assume.

Eritrea and Ethiopia cannot afford F-35s, though. They probably couldn't afford to operate them even if you gave them away for free. $32K per flight hour is a lot when your country's GDP is barely $4 billion (as is the case for Eritrea). There's a reason countries in Africa and South America choose the Gripen, and it's not because of its incredible performance.

Even the Swiss, who aren't exactly short on money, chose the Gripen for its low operating cost. Their Air Force would have preferred the Rafale, but they went with the cheaper option, and then the population voted not to buy anything after all. Which makes sense, because they're not even using their air force for anything, so what's the point?

FADEtoBLACK posted:

He was talking about the F22, not the F35. There are youtube videos of the F22 being beat in short-range engagements by more agile fighters. It's designed to use superior sensors and weapons at long range and then GTFO.

The thing is that it's generally much cheaper to retrofit superior sensors on a tried and true airframe.

Stairmaster
Jun 8, 2012

Barlow posted:

Even if we lived in some alternate universe where air-to-air superiority is worth spending more than the GDP of the 16th wealthiest nation on earth it's irrelevant in this case. We aren't talking about the F-22 here, which is an insanely expensive but working plane that does air combat well and will for the foreseeable future, we're talking the F-35. Did we really need another plane that would win an air war against MIGs?

I think the F-22 is actually less expensive than the F-35 at this point.

FADEtoBLACK
Jan 26, 2007
I really wasn't attempting to defend the F22. Any new manned aircraft that doesn't haul a large number of humans doesn't need to be manned anymore, and all the existing stuff is going to be good enough and is still in production. I'm pretty sure the only reason the F22 and the F35 exist is because of how isolated the rich are: they think that anything promised will be delivered and will be worth it, and if it isn't, then oh well, it's American.

Main Paineframe
Oct 27, 2010

MeramJert posted:

Is this actually true? That youtube video of the aircraft designer talking about the F35 says they don't and can't actually do this, and it's just a technological dream. I trust he knows what he's talking about more than I trust a bunch of nerds on D&D. Are you saying he's wrong?

One needs to distinguish between capability, theory, practicality, and reality here. In theory, the F22's main strategy is long-distance beyond-visual-range engagements with missiles, and it was built with the intention of having that capability. In reality, though, there aren't too many instances of that actually happening in real combat, because the rules of engagement don't usually permit that for practical reasons. It's what the F22 is supposedly supposed to do, but it's also a tech-fetish dream that doesn't actually happen on the battlefield. That's my understanding, anyway.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Main Paineframe posted:

One needs to distinguish between capability, theory, practicality, and reality here. In theory, the F22's main strategy is long-distance beyond-visual-range engagements with missiles, and it was built with the intention of having that capability. In reality, though, there aren't too many instances of that actually happening in real combat, because the rules of engagement don't usually permit that for practical reasons. It's what the F22 is supposedly supposed to do, but it's also a tech-fetish dream that doesn't actually happen on the battlefield. That's my understanding, anyway.

It's been that way since Vietnam, and every time, guns and increasingly short-range missiles ended up being more important than BVR stuff.

Slow News Day
Jul 4, 2007

DeusExMachinima posted:

If you have a relatively defined moveset, it is piss easy to create a learning heuristic without any advances in AI or computing whatsoever. If contact with the remote pilot is lost, the drone can take over and at least stand a chance. And if it gets shot down, all the other drones will know not to make the same mistake against that enemy (just like taking beads out of a box).

Drawing the line between truly autonomous drones and remotely piloted ones is way too black and white. Pilots will pilot when they can, and the drones won't be helpless if they get jammed. The flip side is there's no reason to ever totally remove humans from the decision-making process.

If the drone loses link with the remote pilot it will most likely be due to jamming gear used by the enemy, in which case it won't be able to share its "learnings" with the other drones either.
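The "beads out of a box" comparison in the quoted post is essentially MENACE-style matchbox learning: keep a weighted bag of candidate moves per situation, and strip weight from whatever move got a drone killed. A toy sketch, with all states, moves, and bead counts invented for illustration:

```python
import random
from collections import defaultdict

class BeadBox:
    """MENACE-style learner: one box of weighted 'beads' per state."""
    def __init__(self, moves, initial_beads=10):
        self.moves = list(moves)
        self.boxes = defaultdict(lambda: {m: initial_beads for m in self.moves})

    def choose(self, state):
        # pick a move with probability proportional to its bead count
        box = self.boxes[state]
        moves, weights = zip(*box.items())
        return random.choices(moves, weights=weights)[0]

    def punish(self, state, move, beads=3):
        # "got shot down": remove beads so the move is chosen less often,
        # keeping a floor of 1 so it is never ruled out entirely
        box = self.boxes[state]
        box[move] = max(1, box[move] - beads)

learner = BeadBox(moves=["evade", "climb", "engage"])
learner.punish("enemy_locked_on", "engage", beads=9)
print(learner.boxes["enemy_locked_on"])  # engage now weighted 1 vs 10 each
```

Broadcasting those updated weights to the rest of the fleet is exactly the "share its learnings" step that jamming would interrupt, as the reply above points out.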

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

enraged_camel posted:

If the drone loses link with the remote pilot it will most likely be due to jamming gear used by the enemy, in which case it won't be able to share its "learnings" with the other drones either.

These aren't Cylon raiders; drones wouldn't be "learning" much on the battlefield. You would cook up one program - a good set of reward/danger heuristics, a set of possible moves, and an algorithm for evaluating the moves against the reward/danger heuristics for as many plies into the future as possible - and freeze it for deployment. The "learning" is actually training the reward/danger heuristics (in a machine-learning sense) to properly value positioning, accomplishment of goals, etc. And machine learning is a VERY iterative process, so you'd do this in a simulator.

Ideally you would be able to send back the drone autopilot's system inputs (radar view, etc) with the greatest possible detail and frequency to be able to better train the system in the future, but it's probably more realistic to store that on a disk rather than uplink it live, and it's not really essential to the training process.

You might be able to do some very simplistic live strategy refinements, like "my strategy X to counter enemy strategy Y in situation Z got me blown up" and immediately down-weight that choice for that situation, but overall the idea is to improve your reward/danger heuristic and figure out why that combination got you blown up (gave an enemy a clean shot, etc).

Or I guess the other way I can read your statement is as a generic "what happens when Link-16 gets jammed on a drone", in which case they won't be able to share tactical data and will have to do their best with their own sensors and IFF just like a pilot would.
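The frozen-heuristic-plus-lookahead loop described in this post can be sketched in a few lines. Everything below (the moveset, the state features, the weights) is a made-up toy, not a real flight model; the point is the shape: an evaluation function trained offline and frozen, plus an exhaustive search a few plies deep at runtime.

```python
# Toy sketch of fixed-depth lookahead against a frozen evaluation
# heuristic. Moves, state features, and weights are invented for
# illustration; a real system would train WEIGHTS in a simulator
# and deploy them read-only.

MOVES = {  # move -> (radar_exposure delta, positioning delta)
    "close_distance": (-10, 5),
    "fire_bvr": (-5, 20),
    "evade": (15, -5),
}

WEIGHTS = {"radar_exposure": 1.0, "position": 1.5}  # frozen after training

def evaluate(state):
    # linear reward/danger heuristic over the state features
    return sum(WEIGHTS[k] * v for k, v in state.items())

def apply_move(state, move):
    d_exp, d_pos = MOVES[move]
    return {"radar_exposure": state["radar_exposure"] + d_exp,
            "position": state["position"] + d_pos}

def best_move(state, depth=2):
    # exhaustively score every move sequence `depth` plies ahead
    def search(s, d):
        if d == 0:
            return evaluate(s), None
        return max((search(apply_move(s, m), d - 1)[0], m) for m in MOVES)
    return search(state, depth)[1]

print(best_move({"radar_exposure": 0, "position": 0}))  # "fire_bvr" here
```

Post-deployment "learning" in this framing is just adjusting WEIGHTS between sorties, not on the battlefield, which matches the simulator-iteration point above.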

Paul MaudDib fucked around with this message at 22:02 on Jul 6, 2014

shrike82
Jun 11, 2005

Yeah, I'd rather not have drones making decisions to kill autonomously.

Pimpmust
Oct 1, 2008

Yeah, well, Macross-style Ghost drones are probably a wee bit further off than most imagine. Considering the cost and all the bugs in the coding on the F-22/F-35, having a proper drone AI that doesn't poo poo the bed under even ideal circumstances isn't going to be either cheap or easy.

A little like the first two generations of AtA missiles (Hey, BVR is gonna be so awesome guys! *Missile tries to shoot down the sun*).

But hey, maybe in 50 years?

GROVER CURES HOUSE
Aug 26, 2007

Go on...

shrike82 posted:

Yeah, I'd rather not have drones making decisions to kill autonomously.

That's not being discussed here at all.

shrike82
Jun 11, 2005

Setting aside the freshman level explanations of machine learning, posters upstream were discussing having drones function autonomously in contested airspace where operators can't remote in. It's not a stretch to see it extended to having the unarmed drones function autonomously in general, and then extended again to autonomous weapons control for self defense, and finally weapons control for offensive purposes.

Anyway, I find it funny that people are still going wow the power of algorithms will do magic. I'm picturing Paul MuadDib posting a decade ago about data mining and how the NSA would be able to safely trawl through information to save us from the terrorists.

shrike82 fucked around with this message at 22:30 on Jul 6, 2014

Mc Do Well
Aug 2, 2008

by FactsAreUseless

shrike82 posted:

Setting aside the freshman level explanations of machine learning, posters upstream were discussing having drones function autonomously in contested airspace where operators can't remote in. It's not a stretch to see it extended to having the unarmed drones function autonomously in general, and then extended again to autonomous weapons control for self defense, and finally weapons control for offensive purposes.

Skynet will be an improvement over existing human governments.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

Setting aside the freshman level explanations of machine learning, posters upstream were discussing having drones function autonomously in contested airspace where operators can't remote in. It's not a stretch to see it extended to having the unarmed drones function autonomously in general, and then extended again to autonomous weapons control for self defense, and finally weapons control for offensive purposes.

Autonomous weapons control already exists; it's called a "missile". Close-in weapons systems go all the way to controlling the initial firing/launch. On the ground, there are systems like the Sampson Remote Control Weapons Station; the current implementation still asks the operator to confirm before it fires, but there's no reason it couldn't operate on automatic to create a killzone.

We already essentially rely on computers to select targets for us; there's simply no way for pilots to externally verify the target in many engagement situations. And they haven't shown any hesitancy about pulling the trigger when the targeting is ambiguous (see: Collateral Murder). Functionally I don't see a difference: a doctrine of positive identification or shoot-first-ask-later is what matters, not whether it's implemented by a computer or by a person in a pilot's seat. The traditional argument from drone operators is that being halfway around the world gives them the physical safety and emotional distance to make rational choices; that argument applies equally well to computer control. Particularly since, until the control links get jammed, the operators will be the ones operating the drones.

I don't really like it as such, but I think it's totally unavoidable; weapons systems have only become more autonomous as time goes on. And once someone does it, no one will want to be stuck pitting big manned fighters against small maneuverable drone fighters.

shrike82 posted:

Anyway, I find it funny that people are still going wow the power of algorithms will do magic. I'm picturing Paul MuadDib posting a decade ago about data mining and how the NSA would be able to safely trawl through information to save us from the terrorists.

It's not really "the power of algorithms", game-playing algorithms have been around since forever, it's how far portable supercomputing has come in the last 5-10 years. It's now totally reasonable to envision a small supercomputer that could be put into a drone, with enough power to reasonably handle real-time tactical control of an aircraft.

20 years ago it would have been ludicrous to expect a computer to be able to handle high-level natural-language processing in combination with data searching, and now IBM has a machine that can play Jeopardy. Yeah, it had a rough start, it still has its flaws, and I expect the first-gen fighter drones would too.

Artificial intelligence is kind of funny because it's such a moving target. The Turing Test was the original test for "artificial intelligence" and now that we can take reasonable shots at beating it, it's dismissed as being "just a digital parrot that strings together words believably". Watson is just decomposing natural language, extracting the features, performing data query, and synthesizing natural language responses, it's just a couple of different algorithms mixed together! Nothing is ever "achieved", it's all just dismissed as a magic trick.

Paul MaudDib fucked around with this message at 23:31 on Jul 6, 2014

shrike82
Jun 11, 2005

If you have trouble telling the difference between a missile and an autonomous drone, then I'm not sure what I can do to help you.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

If you have trouble telling the difference between a missile and an autonomous drone, then I'm not sure what I can do to help you.

So systems like Aegis or Phalanx don't make autonomous target-and-kill decisions on their own, is that what you're claiming? Because that's just wrong.

quote:

The basis of the system is the 20 mm M61 Vulcan Gatling gun autocannon, used since the 1960s by the United States military in nearly all fighter aircraft (and one land mounting, the M163 VADS), linked to a Ku-band radar system for acquiring and tracking targets. This proven system was combined with a purpose-made mounting, capable of fast elevation and traverse speeds, to track incoming targets. An entirely self-contained unit, the mounting houses the gun, an automated fire control system and all other major components, enabling it to automatically search for, detect, track, engage, and confirm kills using its computer-controlled radar system.
http://en.wikipedia.org/wiki/Phalanx_CIWS#Operation

quote:

Goalkeeper is a Dutch close-in weapon system (CIWS) introduced in 1979 and in use as of 2014. It is an autonomous and completely automatic weapon system for short-range defense of ships against highly maneuverable missiles, aircraft and fast maneuvering surface vessels. Once activated the system automatically performs the entire process from surveillance and detection to destruction, including selection of the next priority target.
http://en.wikipedia.org/wiki/Goalkeeper_CIWS

Like it or not, we're already at the point where computers are selecting targets and pulling the trigger themselves.

I think there's an argument to be made about the degree of supervision we exercise on killer computers, but I think that has more to do with the relative infancy of the technology at present. For the most part, generation-1 fighter drones would be controlled from a command link just like a Predator or a CIWS system, and as the technology gets more mature the autopilot will be supervised and directly controlled less and less.

You can see the same "supervision creep" in missiles. Now that the technology is mature, we have missiles (e.g. AIM-9X) that we launch at a target without positive lock (for example, from an internal bay, or at targets that are substantially off-bore), and we trust that the missile will perform complex maneuvers and then point its seeker at the target the pilot commanded. In that case the pilot is still pulling the trigger, but we're requiring a substantial amount of "intelligent" behavior from the missile.

At the end of the day, until we have Skynet there will always be a human involved at some level - choosing when to enable the CIWS/combat autopilot, planning the mission, programming the autopilots, etc. We can talk ourselves into being content with whatever scrap of mental cover we give ourselves. As our comfort level increases the human involvement will just decrease, that's all.

Paul MaudDib fucked around with this message at 23:53 on Jul 6, 2014

shrike82
Jun 11, 2005

I'm not a military expert, and I suspect you aren't either, given that you're relying on Google to bring up the Aegis. It looks like there's still human control of the platform, given that it shot down an Iranian jetliner.

Cat Mattress
Jul 14, 2012

by Cyrano4747
Close-in weapon systems need to be entirely automated because they need to open fire on the threat as soon as it's detected. Each fraction of a second counts.

Wikipedia posted:

This also makes the timeframe for interception relatively short; for supersonic missiles moving at 1500 m/s it is approximately one-third of a second.

There is no time to insert human oversight in there.

The up-side is that it's a defense system placed on specific military installations (warships) and programmed to attack things that look like missiles. That's a big difference from, say, some ED-209 police bot programmed to shoot criminals (criminals being identified as human beings not in police uniforms).
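The one-third-of-a-second figure in the quote is just detection range over closing speed. A quick check (the ~500 m detection range below is the implied assumption, not a quoted spec):

```python
# Reaction window = detection range / missile closing speed.
def reaction_window_s(detection_range_m, missile_speed_m_s=1500.0):
    return detection_range_m / missile_speed_m_s

# Detection at ~500 m against a 1500 m/s missile leaves ~0.33 s,
# matching the quoted one-third-of-a-second figure.
print(round(reaction_window_s(500), 2))  # 0.33
```

At those timescales, any confirm-before-fire step would consume the entire engagement window, which is the post's point.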

namesake
Jun 19, 2006

"When I was a girl, around 12 or 13, I had a fantasy that I'd grow up to marry Captain Scarlet, but he'd be busy fighting the Mysterons so I'd cuckold him with the sexiest people I could think of - Nigel Mansell, Pat Sharp and Mr. Blobby."

Yes, so the argument against completely automating a weapons unit has already been defeated based upon the need to target quickly, and so the argument for what is needed will advance so long as it is technically possible. Sensible people with concerns about completely automating attack weaponry will lose against the evidence that humans are more than capable of attacking friendlies and innocents and full automation will improve total strike area coverage and such.

Prav
Oct 29, 2011

There's going to be a lot of noise the first time an autonomous drone shoots down an airliner.

Rent-A-Cop
Oct 15, 2004

I posted my food for USPOL Thanksgiving!

namesake posted:

Yes, so the argument against completely automating a weapons unit has already been defeated based upon the need to target quickly, and so the argument for what is needed will advance so long as it is technically possible. Sensible people with concerns about completely automating attack weaponry will lose against the evidence that humans are more than capable of attacking friendlies and innocents and full automation will improve total strike area coverage and such.

There's a reason automated systems are only trusted in tasks where their target cannot possibly be anything but a target. It isn't like 737s routinely fly around at Mach 3 a foot above the water. Bombers and airliners look remarkably similar to a computer, though.

namesake
Jun 19, 2006

"When I was a girl, around 12 or 13, I had a fantasy that I'd grow up to marry Captain Scarlet, but he'd be busy fighting the Mysterons so I'd cuckold him with the sexiest people I could think of - Nigel Mansell, Pat Sharp and Mr. Blobby."

Rent-A-Cop posted:

There's a reason automated systems are only trusted in tasks where their target cannot possibly be anything but a target. It isn't like 737s routinely fly around at mach 3 a foot above the water. Bombers and airliners look remarkably similar to a computer though.

Honest question, though: how much civilian traffic do you get in an aerial combat zone? I imagine anyone with any choice would be steering clear, and if you can program in flight routes, then telling drones to stay out of the lines of attack from those civilian routes, and likewise to ignore planes which are sticking to those routes, is the start of making the idea acceptable to the masses.

Don't get me wrong, autonomous killing platforms are an awful idea, but they are going to be argued for strongly.

KomradeX
Oct 29, 2011

namesake posted:

Honest question, though: how much civilian traffic do you get in an aerial combat zone?

Let's ask Iran

hobbesmaster
Jan 28, 2008

KomradeX posted:

Let's ask Iran

Reminder: The US shot down an Iranian civilian airliner on a daily scheduled route that was in contact with the appropriate ATC and on a common airway. Instead of apologizing, we told them to go gently caress themselves; the US never apologizes.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

namesake posted:

Don't get me wrong, autonomous killing platforms are an awful idea, but they are going to be argued for strongly.

The problem is that we're dealing with a nebulous concept of "killer robots" rather than the incremental way technology advances in the real world. I've sort of been getting at this point obliquely in my previous posts:

quote:

Artificial intelligence is kind of funny because it's such a moving target. The Turing Test was the original test for "artificial intelligence" and now that we can take reasonable shots at beating it, it's dismissed as being "just a digital parrot that strings together words believably". Watson is just decomposing natural language, extracting the features, performing data query, and synthesizing natural language responses, it's just a couple of different algorithms mixed together! Nothing is ever "achieved", it's all just dismissed as a magic trick.

There's an analogous sort of thing with "killer robots". They're a nebulous big-bad with moving goalposts that can never actually be achieved in real life. In the abstract there's a line we shouldn't cross, but that line is always far away from the present applications of the technology, whatever those are.

Most people agree that if Daniel Greystone invented the MCP and we had armies of walking talking Cylons, or we had Skynet ordering missions, that would be a bad thing. But we're totally comfortable with long-range weapons that identify their target and guide themselves toward them, that's just guidance, humans are still setting the target. We're totally comfortable with computers that identify targets for us in ways that we can't possibly double-check within combat, humans might still put their lives on the line and choose not to pull the trigger. We're totally comfortable with weapons systems that can autonomously lock and fire on targets without human intervention, those other machines attack too fast for humans to respond so we really need it and humans are still the ones turning the auto-kill mode on and off. Those things aren't totally different, they're points along the same spectrum of automation and mechanization of combat.

Whatever drones can do in the future, short of Strong AI being invented, we will tell ourselves that at the end of the day they are just weapons under our control. They follow the missions and use the engagement rules we program. It's not a killer robot, it's just a UAV with the capability to autonomously complete missions if control is lost. It'd be great on UAV strike missions, or for a team of UAVs enforcing a no-fly zone. It's just a combination of technologies which everyone understands and is comfortable with, like a cruise missile using internal guidance if it's jammed. It would seem necessary to operate in hostile territory, and like UAVs generally it helps you get more out of your pool of pilots. I can't see it not happening in a serious conflict.

Of course this is a long-term view, but in the short-term I think the technology for unmanned air-combat aircraft isn't far off (say 20 years). I certainly see major advantages to militaries deploying killer robots, and public opinion has never really mattered when the balance of power is at stake.

Paul MaudDib fucked around with this message at 02:10 on Jul 7, 2014

KomradeX
Oct 29, 2011

hobbesmaster posted:

Reminder: The US shot down an Iranian civilian airliner on a daily scheduled route that was in contact with the appropriate ATC and on a common airway. Instead of apologizing, we told them to go gently caress themselves; the US never apologizes.

That is what I was referring to; I just couldn't remember what the flight was called, but hell if I could forget KAL-700.

shrike82
Jun 11, 2005

Paul MaudDib posted:

The problem is that we're dealing with a nebulous concept of "killer robots" rather than the incremental way technology advances in the real world. I've sort of been getting at this point obliquely in my previous posts:


There's an analogous sort of thing with "killer robots". They're a nebulous big-bad with moving goalposts that can never actually be achieved in real life. In the abstract there's a line we shouldn't cross, but that line is always far away from the present applications of the technology, whatever those are.

Most people agree that if Daniel Greystone invented the MCP and we had armies of walking talking Cylons, or we had Skynet ordering missions, that would be a bad thing. But we're totally comfortable with long-range weapons that identify their target and guide themselves toward them, that's just guidance, humans are still setting the target. We're totally comfortable with computers that identify targets for us in ways that we can't possibly double-check within combat, humans might still put their lives on the line and choose not to pull the trigger. We're totally comfortable with weapons systems that can autonomously lock and fire on targets without human intervention, those other machines attack too fast for humans to respond so we really need it and humans are still the ones turning the auto-kill mode on and off. Those things aren't totally different, they're points along the same spectrum of automation and mechanization of combat.

Whatever drones can do in the future, short of Strong AI being invented, we will tell ourselves that at the end of the day they are only following human orders. Humans approved the machine-optimized interdiction plan and pushed the button to deploy the drones on the mission, therefore it's not a killer computer, they're just tools, and everyone completely understands how each of the individual tools works and has years of experience with them.

Of course this is a long-term view, but in the short-term I think the technology for unmanned air-combat aircraft isn't far off (say 20 years). I certainly see major advantages to militaries deploying killer robots, and public opinion has never really mattered when the balance of power is at stake.

And remember, we're totally OK with indiscriminate weapons like landmines anyway.

So the argument is that because the military is going to force it down our throats, we have to accept it?

Remind me why you're against NSA surveillance, given that it's a logical extension of all the advances made in data mining and NLP?

Rent-A-Cop
Oct 15, 2004

I posted my food for USPOL Thanksgiving!

namesake posted:

Honest question though: how much civilian traffic do you get in an aerial combat zone? I imagine anyone with any choice would be steering clear, and if you can program in flight routes, then telling drones to stay out of lines of attack from those civilian routes, and likewise to ignore planes which are sticking to those routes, is the start of making the idea acceptable to the masses.
Surprisingly enough there's rather a lot of air traffic over some really poo poo places and it only takes one oops to be a real shitshow. A drone that's a little confused about where exactly it is could get quite killy if it wandered off station.

shrike82
Jun 11, 2005

Or that we've already deployed drones within the States for law enforcement and surveillance purposes. Given the history of military-to-police technology and equipment transfers, it's not unthinkable that autonomous drones get deployed within the US.

Randalor
Sep 4, 2011



KomradeX posted:

That is what I was referring to, I just couldn't remember what the flight was called, but hell if I could forget KAL-700.

You mean Iran Air Flight 655? KAL-007 was a plane shot down by the Soviets after diverting way off course.

Spaceman Future!
Feb 9, 2007

Rent-A-Cop posted:

Surprisingly enough there's rather a lot of air traffic over some really poo poo places and it only takes one oops to be a real shitshow. A drone that's a little confused about where exactly it is could get quite killy if it wandered off station.

A computer is capable of being much more accurate than any human at identifying airframes at extreme distances. I don't get this track of argument at all; it would require humans to be absolutely infallible, which we have a good 50,000 years of evidence showing we really aren't. Especially if your argument depends on things like getting "lost" and improperly identifying shapes at distances beyond which the human eye can properly function. At best a human can give you their general vicinity, and count the contrails and general heading of a target 10 miles out. A computer can give you its exact GPS-provided coordinates, its heading and speed instantly corrected for instrument precession, pattern recognition as far out as you can build lenses for, immediate radar signature cross-referencing, and above all it doesn't get finger twitch on the trigger or bloodlust when another Predator gets blown out of the sky.

JeffersonClay
Jun 17, 2003

by R. Guyovich

Paul MaudDib posted:

Artificial intelligence is kind of funny because it's such a moving target. The Turing Test was the original test for "artificial intelligence" and now that we can take reasonable shots at beating it, it's dismissed as being "just a digital parrot that strings together words believably". Watson is just decomposing natural language, extracting the features, performing data query, and synthesizing natural language responses, it's just a couple of different algorithms mixed together! Nothing is ever "achieved", it's all just dismissed as a magic trick.

Which is super-ironic because human consciousness is itself a bunch of different algorithms mixed together that people revere as some sort of magic miracle.


FADEtoBLACK
Jan 26, 2007
You can complain about the possibility of an automated weapon system making a mistake over a human being, but anyone involved in high-level military decision making is looking at "supply lines and morale are now meaningless, and except for regular maintenance my infantry, armor, and air forces never sleep, eat, or have to stop for rest." All of this talk of "but they might make horrible mistakes" ignores that they have less of a chance of making them, and if one of them gets wiped out the rest of the robots won't be storming a family home for revenge.

I mean, I don't like it either, but you're ignoring so much poo poo in our society if you don't believe for one second that, in the context of America going to war, this is pretty much already planned and paid for, and at the very least will scare the poo poo out of anyone who wants to find out what it's like to fight a robot with faster response times and better "sensors" than a human being.

The whole point of the American military is to make normal war untenable for everyone but us and people we like. This is going to happen even if an army of robots malfunctions and kills a whole city's worth of children; at the end of the day Americans will believe that none of the people they like are being killed, because they're robots now, and if they have to be deployed anywhere it was for a good reason and the people there are obviously bad.

FADEtoBLACK fucked around with this message at 06:10 on Jul 7, 2014
