|
FADEtoBLACK posted:So that's it? The US needs to invest in fighter tech for the sake of economic export?
|
# ? Jul 6, 2014 07:21 |
|
|
Paul MaudDib posted:The main engagement strategy for an aircraft like an F-22 is "fly towards enemy, at 80 miles out shoot long-range missiles before we show up on their radar" which is not exactly a complicated mission to automate. Is this actually true? That youtube video of the aircraft designer talking about the F35 says they don't and can't actually do this, and it's just a technological dream. I trust he knows what he's talking about more than I trust a bunch of nerds on D&D. Are you saying he's wrong?
|
# ? Jul 6, 2014 07:49 |
|
MeramJert posted:Is this actually true? That youtube video of the aircraft designer talking about the F35 says they don't and can't actually do this, and it's just a technological dream. I trust he knows what he's talking about more than I trust a bunch of nerds on D&D. Are you saying he's wrong? He was talking about the F-22, not the F-35. There are YouTube videos of the F-22 being beaten in short-range engagements by more agile fighters. It's designed to use superior sensors and weapons at long range and then GTFO, which is programmable when you out-range and out-sense everything and can also respond faster than a human being.
|
# ? Jul 6, 2014 07:56 |
|
FADEtoBLACK posted:He was talking about the F-22, not the F-35. There are YouTube videos of the F-22 being beaten in short-range engagements by more agile fighters. It's designed to use superior sensors and weapons at long range and then GTFO. Did you watch the video? That guy was claiming this never happens, not just that the F-35 isn't capable of it.
|
# ? Jul 6, 2014 07:57 |
|
I'm not sure if I did; was it the guy who was involved with the F-16 talking about the 22/35? I do trust him, but you have to remember the range of the engagement depends on your sensor and weapons package, not on what plane you're flying. There are articles about flight simulations of weapons systems that are still being worked on. Look, I agree with you on doubting they can do this right now, but this is more of a 'feature creep' thing where every future aircraft will steadily increase engagement range and weapon capability. The F-22 is an optimal but horribly expensive weapons platform, and the F-35 exists because the dream project for Lockheed is a fighter built in every state whose development is always ongoing and never ends, with sole contract rights on maintenance and production. This is what happens when you give the military-industrial complex 'something to do'
FADEtoBLACK fucked around with this message at 08:07 on Jul 6, 2014 |
# ? Jul 6, 2014 08:04 |
|
MeramJert posted:Did you watch the video? That guy was claiming this never happens, not just that the F-35 isn't capable of it. That really depends on the engagement and location. The issue has never been "can we blow it up from 100 miles away", it has always been "are we totally sure that's a MiG and not one of our Blackhawks". As automation of mission logistics improves, the chances that you're misidentifying a friendly at that distance are significantly lower, and with the automation of warfare now, in most cases if you are incorrect you will more than likely end up accidentally blowing up one of your own Predators rather than a manned plane.

Of course, the only foolproof way to fix that long-range conundrum is to put all these manned fighters in mothballs and use strictly unmanned tech, which is the smart thing to do anyway, since any unmanned tech is going to be controlled from a central source and will be able to identify each other's positions easily because all the commands and controls can be centrally audited in real time. Plus, air superiority is inherent to the Predator drone simply by default of cost. For the cost of a single F-35 you can field roughly 40 Predators; if all 40 engage a single air target you win even if a few get shot down by the "superior" platform.

Why the hell are we spending money on this horseshit again? Can't we just give money to workers in manufacturing districts instead? It seems it would be cheaper just to cover their salaries and we would save money on building materials, facilities and all that other poo poo. Everybody wins. Spaceman Future! fucked around with this message at 17:56 on Jul 6, 2014 |
# ? Jul 6, 2014 08:46 |
|
He is probably talking about Pierre Sprey. In response, the tennis court is 104-0.
|
# ? Jul 6, 2014 09:00 |
|
Paul MaudDib posted:The F-35 is just a piece of designed-by-committee poo poo. So far as I can see it's the expensive modern-day equivalent of the F-4. It's designed to fit everyone's needs and now it sucks at most of its roles. Nevertheless, we lack any real competition in the air-power world, so it'll probably do just fine. It's not going to realistically come up against cutting-edge Russian interceptors so far as I can see. Actually it's the modern-day equivalent of the F-105 Thunderchief. quote:F-105 quote:F-35 Okay, I lied: it's heavier, slower, and has higher wing-loading. Pimpmust fucked around with this message at 09:18 on Jul 6, 2014 |
# ? Jul 6, 2014 09:11 |
|
hobbesmaster posted:Eritrean-Ethiopian war had dogfights between jets in 2002. Eritrea and Ethiopia cannot afford F-35s, though. They probably couldn't afford to operate them even if they were given them for free. $32K per flight hour is a lot when your country's GDP is barely $4 billion (as is the case for Eritrea). There's a reason countries in Africa and South America choose the Gripen, and it's not its incredible performance. Even the Swiss, who aren't exactly short on money, chose the Gripen for its low operating cost; their Air Force would have preferred the Rafale, but they went with the cheaper option, and then the population voted not to buy anything after all. Which makes sense, because they're not even using their air force for anything, so what's the point? FADEtoBLACK posted:He was talking about the F-22, not the F-35. There are YouTube videos of the F-22 being beaten in short-range engagements by more agile fighters. It's designed to use superior sensors and weapons at long range and then GTFO. The thing is that it's generally much cheaper to retrofit superior sensors onto a tried and true airframe.
|
# ? Jul 6, 2014 09:58 |
|
Barlow posted:Even if we lived in some alternate universe where air-to-air superiority is worth spending more than the GDP of the 16th wealthiest nation on earth, it's irrelevant in this case. We aren't talking about the F-22 here, which is an insanely expensive but working plane that does air combat well and will for the foreseeable future, we're talking the F-35. Did we really need another plane that would win an air war against MiGs? I think the F-22 is actually less expensive than the F-35 at this point.
|
# ? Jul 6, 2014 10:57 |
|
I really wasn't attempting to defend the F-22. Any new manned aircraft that doesn't haul a large number of humans doesn't need to be manned anymore, and all the existing stuff is going to be good enough and is still in production. I'm pretty sure the only reason the F-22 and the F-35 exist is how isolated the rich are: they think that anything promised will be delivered and will be worth it, and if it isn't, then oh well, it's American.
|
# ? Jul 6, 2014 11:41 |
|
MeramJert posted:Is this actually true? That youtube video of the aircraft designer talking about the F35 says they don't and can't actually do this, and it's just a technological dream. I trust he knows what he's talking about more than I trust a bunch of nerds on D&D. Are you saying he's wrong? One needs to distinguish between capability, theory, practicality, and reality here. In theory, the F-22's main strategy is long-distance beyond-visual-range engagement with missiles, and it was built with the intention of having that capability. In reality, though, there aren't many instances of that actually happening in real combat, because the rules of engagement don't usually permit it, for practical reasons. It's what the F-22 is supposed to do on paper, but it's also a tech-fetish dream that doesn't actually happen on the battlefield. That's my understanding, anyway.
|
# ? Jul 6, 2014 17:23 |
|
Main Paineframe posted:One needs to distinguish between capability, theory, practicality, and reality here. In theory, the F-22's main strategy is long-distance beyond-visual-range engagement with missiles, and it was built with the intention of having that capability. In reality, though, there aren't many instances of that actually happening in real combat, because the rules of engagement don't usually permit it, for practical reasons. It's what the F-22 is supposed to do on paper, but it's also a tech-fetish dream that doesn't actually happen on the battlefield. That's my understanding, anyway. It's been that way since Vietnam, and every time, guns and increasingly short-range missiles ended up being more important than the BVR stuff.
|
# ? Jul 6, 2014 18:09 |
|
DeusExMachinima posted:If you have a relatively defined moveset, it is piss easy to create a learning heuristic without any advances in AI or computing whatsoever. If contact with the remote pilot is lost, the drone can take over and at least stand a chance. And if it gets shot down, all the other drones will know not to make the same mistake against that enemy(just like taking beads out of a box). If the drone loses link with the remote pilot it will most likely be due to jamming gear used by the enemy, in which case it won't be able to share its "learnings" with the other drones either.
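[Editor's note: the "taking beads out of a box" aside is a nod to MENACE-style matchbox learning, where each game situation gets a box of beads, one color per move, and losing moves get beads removed. A toy sketch of that idea in Python; the situation names and move set are invented for illustration:]

```python
import random
from collections import defaultdict

class BeadLearner:
    """MENACE-style learner: each situation holds a 'box' of beads,
    one per possible move; moves that lead to a loss get beads removed."""

    def __init__(self, moves, initial_beads=3):
        self.moves = list(moves)
        # every unseen situation starts with equal beads for each move
        self.boxes = defaultdict(lambda: {m: initial_beads for m in self.moves})

    def choose(self, situation):
        # pick a move with probability proportional to its bead count
        box = self.boxes[situation]
        beads = [m for m, n in box.items() for _ in range(n)]
        return random.choice(beads) if beads else random.choice(self.moves)

    def punish(self, situation, move):
        # "taking beads out of a box": make the losing move rarer
        box = self.boxes[situation]
        box[move] = max(box[move] - 1, 0)

learner = BeadLearner(moves=["climb", "dive", "break_left", "break_right"])
move = learner.choose("enemy_on_six")
learner.punish("enemy_on_six", move)  # that move got us shot down
```

No AI breakthroughs involved, which is the poster's point: it is just frequency bookkeeping over a fixed move set.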
|
# ? Jul 6, 2014 18:29 |
|
enraged_camel posted:If the drone loses link with the remote pilot it will most likely be due to jamming gear used by the enemy, in which case it won't be able to share its "learnings" with the other drones either. This isn't Cylon raiders here; drones wouldn't be "learning" much on the battlefield. You would cook up one program - a good set of reward/danger heuristics, a set of possible moves, and an algorithm for evaluating the moves against the reward/danger heuristics for as many plies into the future as possible - and freeze it for deployment. The "learning" is actually training the reward/danger heuristics (in a machine-learning sense) to properly value positioning, accomplishment of goals, etc. And machine learning is a VERY iterative process, so you'd do this in a simulator.

Ideally you would be able to send back the drone autopilot's system inputs (radar view, etc.) with the greatest possible detail and frequency to better train the system in the future, but it's probably more realistic to store that on a disk rather than uplink it live, and it's not really essential to the training process. You might be able to do some very simplistic live strategy refinements, like "my strategy X to counter enemy strategy Y in situation Z got me blown up" and immediately down-weight that choice for that situation, but overall the idea is to improve your reward/danger heuristic and figure out why that combination got you blown up (gave an enemy a clean shot, etc.).

Or I guess the other way I can read your statement is as a generic "what happens when Link-16 gets jammed on a drone", in which case they won't be able to share tactical data and will have to do their best with their own sensors and IFF, just like a pilot would. Paul MaudDib fucked around with this message at 22:02 on Jul 6, 2014 |
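[Editor's note: the "frozen heuristic plus ply search" being described can be sketched in a few lines. This is a single-agent lookahead, not full adversarial minimax, and every state field, move name, and weight below is made up for illustration; a real trained evaluator would be far richer:]

```python
def evaluate(state):
    # reward/danger heuristic: trained offline in a simulator, then frozen.
    # Stand-in weights: reward altitude advantage, penalize radar exposure.
    return state["altitude_advantage"] - 2.0 * state["exposure"]

def apply_move(state, move):
    # toy state-transition model
    nxt = dict(state)
    if move == "climb":
        nxt["altitude_advantage"] += 1
    elif move == "hide":
        nxt["exposure"] -= 1
    return nxt

def best_move(state, moves, step, depth):
    """Evaluate each move 'depth' plies ahead against the frozen heuristic."""
    if depth == 0 or not moves:
        return None, evaluate(state)
    best, best_score = None, float("-inf")
    for m in moves:
        _, score = best_move(step(state, m), moves, step, depth - 1)
        if score > best_score:
            best, best_score = m, score
    return best, best_score

state = {"altitude_advantage": 0, "exposure": 1}
move, score = best_move(state, ["climb", "hide"], apply_move, depth=2)
print(move, score)  # → hide 2.0
```

Nothing here "learns" at runtime; all the intelligence lives in how `evaluate` was trained before the program was frozen, which is the post's point.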
# ? Jul 6, 2014 18:40 |
|
Yeah, I'd rather not have drones making decisions to kill autonomously.
|
# ? Jul 6, 2014 20:51 |
|
Yeah, well, Macross-style Ghost drones are probably a weee bit longer off than most imagine. Considering the cost and all the bugs in the coding on the F-22/F-35, having a proper drone AI that doesn't poo poo the bed in even ideal circumstances isn't going to be either cheap or easy. A little like the first two generations of air-to-air missiles (Hey, BVR is gonna be so awesome, guys! *Missile tries to shoot down the sun*). But hey, maybe in 50 years?
|
# ? Jul 6, 2014 22:00 |
|
shrike82 posted:Yeah, I'd rather not have drones making decisions to kill autonomously. That's not being discussed here at all.
|
# ? Jul 6, 2014 22:07 |
|
Setting aside the freshman level explanations of machine learning, posters upstream were discussing having drones function autonomously in contested airspace where operators can't remote in. It's not a stretch to see it extended to having the unarmed drones function autonomously in general, and then extended again to autonomous weapons control for self defense, and finally weapons control for offensive purposes. Anyway, I find it funny that people are still going "wow, the power of algorithms will do magic". I'm picturing Paul MuadDib posting a decade ago about data mining and how the NSA would be able to safely trawl through information to save us from the terrorists. shrike82 fucked around with this message at 22:30 on Jul 6, 2014 |
# ? Jul 6, 2014 22:20 |
|
shrike82 posted:Setting aside the freshman level explanations of machine learning, posters upstream were discussing having drones function autonomously in contested airspace where operators can't remote in. It's not a stretch to see it extended to having the unarmed drones function autonomously in general, and then extended again to autonomous weapons control for self defense, and finally weapons control for offensive purposes. Skynet will be an improvement over existing human governments.
|
# ? Jul 6, 2014 22:29 |
|
shrike82 posted:Setting aside the freshman level explanations of machine learning, posters upstream were discussing having drones function autonomously in contested airspace where operators can't remote in. It's not a stretch to see it extended to having the unarmed drones function autonomously in general, and then extended again to autonomous weapons control for self defense, and finally weapons control for offensive purposes. Autonomous weapons control already exists; it's called a "missile". Close-in weapon systems go all the way to controlling the initial firing/launch. On the ground, there are systems like the Sampson Remote Control Weapons Station; the current implementation still asks the operator to confirm before it fires, but there's no reason it couldn't operate on automatic to create a killzone.

We already essentially rely on computers to select targets for us; there's simply no way for pilots to externally verify the target in many engagement situations. And they haven't shown any hesitancy about pulling the trigger when the targeting is ambiguous (see: Collateral Murder). Functionally I don't see a difference: a doctrine of positive identification or shoot-first, ask-later is what matters, not whether it's implemented by a computer or by a person in a pilot's seat. The traditional argument from drone operators is that being halfway around the world gives them the physical safety and emotional distance to make rational choices; that argument applies equally well to computer control. Particularly since, until the control links get jammed, the operators will be the ones operating the drones.

I don't really like it as such, but I think it's totally unavoidable; weapons systems have become nothing but more autonomous as time goes on. And once someone does it, no one will want to be stuck pitting big manned fighters against small maneuverable drone fighters. 
shrike82 posted:Anyway, I find it funny that people are still going "wow, the power of algorithms will do magic". I'm picturing Paul MuadDib posting a decade ago about data mining and how the NSA would be able to safely trawl through information to save us from the terrorists. It's not really "the power of algorithms" - game-playing algorithms have been around since forever - it's how far portable supercomputing has come in the last 5-10 years. It's now totally reasonable to envision a small supercomputer that could be put into a drone, with enough power to reasonably handle real-time tactical control of an aircraft. 20 years ago it would have been ludicrous to expect a computer to handle high-level natural-language processing in combination with data searching, and now IBM has a machine that can play Jeopardy. Yeah, it had a rough start, it still has its flaws, and I expect the first-gen fighter drones would too.

Artificial intelligence is kind of funny because it's such a moving target. The Turing Test was the original test for "artificial intelligence" and now that we can take reasonable shots at beating it, it's dismissed as being "just a digital parrot that strings together words believably". Watson is just decomposing natural language, extracting the features, performing data queries, and synthesizing natural-language responses - it's just a couple of different algorithms mixed together! Nothing is ever "achieved"; it's all just dismissed as a magic trick. Paul MaudDib fucked around with this message at 23:31 on Jul 6, 2014 |
# ? Jul 6, 2014 23:07 |
|
If you have trouble telling the difference between a missile and an autonomous drone, then I'm not sure what I can do to help you.
|
# ? Jul 6, 2014 23:12 |
|
shrike82 posted:If you have trouble telling the difference between a missile and an autonomous drone, then I'm not sure what I can do to help you. So systems like Aegis or Phalanx don't make autonomous target-and-kill decisions on their own, is that what you're claiming? Because that's just wrong.

quote:The basis of the system is the 20 mm M61 Vulcan Gatling gun autocannon, used since the 1960s by the United States military in nearly all fighter aircraft (and one land mounting, the M163 VADS), linked to a Ku-band radar system for acquiring and tracking targets. This proven system was combined with a purpose-made mounting, capable of fast elevation and traverse speeds, to track incoming targets. An entirely self-contained unit, the mounting houses the gun, an automated fire control system and all other major components, enabling it to automatically search for, detect, track, engage, and confirm kills using its computer-controlled radar system.

quote:Goalkeeper is a Dutch close-in weapon system (CIWS) introduced in 1979 and in use as of 2014. It is an autonomous and completely automatic weapon system for short-range defense of ships against highly maneuverable missiles, aircraft and fast maneuvering surface vessels. Once activated the system automatically performs the entire process from surveillance and detection to destruction, including selection of the next priority target.

Like it or not, we're already at the point where computers are selecting targets and pulling the trigger themselves. I think there's an argument to be made about the degree of supervision we exercise on killer computers, but I think that has more to do with the relative infancy of the technology at present. For the most part, generation-1 fighter drones would be controlled from a command link just like a Predator or a CIWS system, and as the technology gets more mature the autopilot will be supervised and directly controlled less and less. You can see the same "supervision creep" in missiles. 
Now that the technology is mature, we have missiles (e.g. the AIM-9X) that we launch at a target without positive lock (for example, from an internal bay, or at targets that are substantially off-bore), and we trust that the missile will perform complex maneuvers and then point its seeker at the target the pilot commanded. In that case the pilot is still pulling the trigger, but we're requiring a substantial amount of "intelligent" behavior from the missile.

At the end of the day, until we have Skynet there will always be a human involved at some level: choosing when to enable the CIWS/combat autopilot, planning the mission, programming the autopilots, etc. We can talk ourselves into being content with whatever scrap of mental cover we give ourselves. As our comfort level increases, the human involvement will just decrease, that's all. Paul MaudDib fucked around with this message at 23:53 on Jul 6, 2014 |
# ? Jul 6, 2014 23:18 |
|
I'm not a military expert and I suspect you aren't either, given that you're relying on Google to bring up the Aegis. It looks like there's still human control of the platform, given that it shot down an Iranian jetliner.
|
# ? Jul 6, 2014 23:31 |
|
Close-in weapon systems need to be entirely automated because they need to open fire on the threat as soon as it's detected. Each fraction of a second counts. Wikipedia posted:This also makes the timeframe for interception relatively short; for supersonic missiles moving at 1500 m/s it is approximately one-third of a second. The up-side is that it's a defense system placed on specific military installations (warships) and programmed to attack things that look like missiles. That's a big difference from, say, some ED-209 police bot programmed to shoot criminals (criminals being identified as human beings not in police uniforms).
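[Editor's note: the one-third-of-a-second figure is simple arithmetic if you assume roughly a 500 m effective engagement envelope. That range is an assumption chosen to match the quoted number; it is not in the Wikipedia excerpt:]

```python
# Back-of-the-envelope check of the quoted intercept window.
# Assumed effective engagement envelope: ~500 m (not from the quote).
missile_speed_m_s = 1500.0
engagement_range_m = 500.0
window_s = engagement_range_m / missile_speed_m_s
print(f"intercept window: {window_s:.2f} s")  # → intercept window: 0.33 s
```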
|
# ? Jul 6, 2014 23:42 |
|
Yes, so the argument against completely automating a weapons unit has already been defeated by the need to target quickly, and the argument for what is needed will keep advancing so long as it is technically possible. Sensible people with concerns about completely automating attack weaponry will lose against the evidence that humans are more than capable of attacking friendlies and innocents, and that full automation will improve total strike area coverage and such.
|
# ? Jul 6, 2014 23:51 |
|
There's going to be a lot of noise the first time an autonomous drone shoots down an airliner.
|
# ? Jul 6, 2014 23:54 |
|
namesake posted:Yes, so the argument against completely automating a weapons unit has already been defeated based upon the need to target quickly, and so the argument for what is needed will advance so long as it is technically possible. Sensible people with concerns about completely automating attack weaponry will lose against the evidence that humans are more than capable of attacking friendlies and innocents and full automation will improve total strike area coverage and such. There's a reason automated systems are only trusted in tasks where their target cannot possibly be anything but a target. It isn't like 737s routinely fly around at mach 3 a foot above the water.
|
# ? Jul 7, 2014 00:06 |
|
Rent-A-Cop posted:There's a reason automated systems are only trusted in tasks where their target cannot possibly be anything but a target. It isn't like 737s routinely fly around at mach 3 a foot above the water. Bombers and airliners look remarkably similar to a computer though. Honest question though: how much civilian traffic do you get during an aerial war combat zone? I imagine anyone with any choice would be steering clear, and if you can program in flight routes then telling drones to stay out of lines of attack from those civilian routes and likewise ignore planes which are sticking to those routes is the start of making the idea acceptable to the masses. Don't get me wrong, autonomous killing platforms are an awful idea, but they are going to be argued for strongly.
|
# ? Jul 7, 2014 00:13 |
|
namesake posted:Honest question though: how much civilian traffic do you get during an aerial war combat zone? Let's ask Iran
|
# ? Jul 7, 2014 00:32 |
|
KomradeX posted:Let's ask Iran Reminder: The US shot down an Iranian civilian airliner on a daily scheduled route that was in contact with the appropriate ATC and on a common airway. Instead of apologizing, we told them to go gently caress themselves; the US never apologizes.
|
# ? Jul 7, 2014 00:40 |
|
namesake posted:Don't get me wrong, autonomous killing platforms is an awful idea but it is going to be argued for strongly. The problem is that we're dealing with a nebulous concept of "killer robots" rather than the incremental way technology advances in the real world. I've sort of been getting at this point obliquely in my previous posts: quote:Artificial intelligence is kind of funny because it's such a moving target. The Turing Test was the original test for "artificial intelligence" and now that we can take reasonable shots at beating it, it's dismissed as being "just a digital parrot that strings together words believably". Watson is just decomposing natural language, extracting the features, performing data query, and synthesizing natural language responses, it's just a couple of different algorithms mixed together! Nothing is ever "achieved", it's all just dismissed as a magic trick. There's an analogous sort of thing with "killer robots". They're a nebulous big-bad with moving goalposts that can never actually be achieved in real life. In the abstract there's a line we shouldn't cross, but that line is always far away from the present applications of the technology, whatever those are. Most people agree that if Daniel Greystone invented the MCP and we had armies of walking talking Cylons, or we had Skynet ordering missions, that would be a bad thing. But we're totally comfortable with long-range weapons that identify their target and guide themselves toward them, that's just guidance, humans are still setting the target. We're totally comfortable with computers that identify targets for us in ways that we can't possibly double-check within combat, humans might still put their lives on the line and choose not to pull the trigger. 
We're totally comfortable with weapons systems that can autonomously lock and fire on targets without human intervention; those other machines attack too fast for humans to respond, so we really need it, and humans are still the ones turning the auto-kill mode on and off. Those things aren't totally different; they're points along the same spectrum of automation and mechanization of combat.

Whatever drones can do in the future, short of Strong AI being invented, we will tell ourselves that at the end of the day they are just weapons under our control. They follow the missions and use the engagement rules we program. It's not a killer robot, it's just a UAV with the capability to autonomously complete missions if control is lost. It'd be great on UAV strike missions, or for a team of UAVs enforcing a no-fly zone. It's just a combination of technologies which everyone understands and is comfortable with, like a cruise missile using internal guidance if it's jammed. It would seem necessary to operate in hostile territory, and like UAVs generally it helps you get more out of your pool of pilots. I can't see it not happening in a serious conflict.

Of course this is a long-term view, but in the short term I think the technology for unmanned air-combat aircraft isn't far off (say 20 years). I certainly see major advantages to militaries deploying killer robots, and public opinion has never really mattered when the balance of power is at stake. Paul MaudDib fucked around with this message at 02:10 on Jul 7, 2014 |
# ? Jul 7, 2014 00:54 |
|
hobbesmaster posted:Reminder: The US shot down an Iranian civilian airliner on a daily scheduled route that was in contact with the appropriate ATC and on a common airway. Instead of apologizing, we told them to go gently caress themselves; the US never apologizes. That is what I was referring to, I just couldn't remember what the flight was called, but hell if I could forget KAL-700.
|
# ? Jul 7, 2014 00:58 |
|
Paul MaudDib posted:The problem is that we're dealing with a nebulous concept of "killer robots" rather than the incremental way technology advances in the real world. I've sort of been getting at this point obliquely in my previous posts: So the argument is that because the military is going to force it down our throats, we have to accept it? Remind me why you're against NSA surveillance, given that it's a logical extension of all the advances made in data mining and NLP?
|
# ? Jul 7, 2014 01:17 |
|
namesake posted:Honest question though: how much civilian traffic do you get during an aerial war combat zone? I imagine anyone with any choice would be steering clear, and if you can program in flight routes then telling drones to stay out of lines of attack from those civilian routes and likewise ignore planes which are sticking to those routes is the start of making the idea acceptable to the masses. Surprisingly enough there's rather a lot of air traffic over some really poo poo places and it only takes one oops to be a real shitshow. A drone that's a little confused about where exactly it is could get quite killy if it wandered off station.
|
# ? Jul 7, 2014 02:14 |
|
Or that we've already deployed drones within the States for law enforcement and surveillance purposes. Given the history of military-> police technology and equipment transfers, it's not unthinkable that autonomous drones get deployed within the US.
|
# ? Jul 7, 2014 02:25 |
|
KomradeX posted:That is what I referring too, I just couldn't remember what the flight was called, but hell if I could forget KAL-700. You mean Iran Air Flight 655? KAL-007 was a plane shot down by the soviets after diverting way off course.
|
# ? Jul 7, 2014 02:25 |
|
Rent-A-Cop posted:Surprisingly enough there's rather a lot of air traffic over some really poo poo places and it only takes one oops to be a real shitshow. A drone that's a little confused about where exactly it is could get quite killy if it wandered off station. A computer is capable of being much more accurate than any human at identifying airframes at extreme distances. I don't get this track of argument at all; it would require humans to be absolutely infallible, which we have a good 50,000 years of evidence indicating we really aren't. Especially if your argument depends on things like getting "lost" and improperly identifying shapes at distances beyond where the human eye can properly function. At best a human can give you their general vicinity, and count the contrails and general heading of a target 10 miles out. A computer can give you its exact GPS coordinates, its heading and speed instantly corrected for instrument drift, pattern recognition as far out as you can build lenses for, immediate radar-signature cross-referencing, and above all it doesn't get finger twitch on the trigger or bloodlust when another Predator gets blown out of the sky.
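[Editor's note: a toy illustration of the "radar signature cross-referencing" being claimed here - a nearest-match lookup of an observed return against a small signature library. Every feature value and scale below is invented for illustration; real systems fuse far richer sensor data:]

```python
# Hypothetical signature library: coarse features per airframe class.
SIGNATURE_LIBRARY = {
    "airliner":       {"rcs_dbsm": 20.0,  "speed_m_s": 250.0, "altitude_m": 11000.0},
    "fighter":        {"rcs_dbsm": 5.0,   "speed_m_s": 450.0, "altitude_m": 9000.0},
    "cruise_missile": {"rcs_dbsm": -10.0, "speed_m_s": 240.0, "altitude_m": 50.0},
}

def classify(observation, library=SIGNATURE_LIBRARY):
    """Return the library entry whose signature is nearest the observation."""
    def distance(sig):
        # normalize each feature so no single one dominates (scales made up)
        scales = {"rcs_dbsm": 10.0, "speed_m_s": 100.0, "altitude_m": 1000.0}
        return sum(((observation[k] - sig[k]) / scales[k]) ** 2 for k in sig) ** 0.5
    return min(library, key=lambda name: distance(library[name]))

print(classify({"rcs_dbsm": 18.0, "speed_m_s": 240.0, "altitude_m": 10500.0}))
# → airliner
```

Note that this sketch is also exactly where the thread's worry lives: a return whose features straddle two classes gets silently assigned to the nearest one, with no "unsure" output unless you add a confidence threshold.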
|
# ? Jul 7, 2014 05:31 |
|
Paul MaudDib posted:Artificial intelligence is kind of funny because it's such a moving target. The Turing Test was the original test for "artificial intelligence" and now that we can take reasonable shots at beating it, it's dismissed as being "just a digital parrot that strings together words believably". Watson is just decomposing natural language, extracting the features, performing data query, and synthesizing natural language responses, it's just a couple of different algorithms mixed together! Nothing is ever "achieved", it's all just dismissed as a magic trick. Which is super-ironic because human consciousness is itself a bunch of different algorithms mixed together that people revere as some sort of magic miracle.
|
# ? Jul 7, 2014 05:38 |
|
|
You can complain about the possibility of an automated weapon system making a mistake versus a human being, but anyone involved in high-level military decision-making is looking at the "supply lines and morale are now meaningless, and except for regular maintenance my infantry, armor, and air forces never sleep, eat, or have to stop for rest" angle. All of this talk of "but they might make horrible mistakes" ignores that they have less of a chance of making them, and if one of them gets wiped out the rest of the robots won't be storming a family home for revenge.

I mean, I don't like it either, but you're ignoring so much poo poo in our society to not believe for one second that, in the context of America going to war, this is pretty much already planned and paid for, and at the very least will scare the poo poo out of anyone who wants to find out what it's like to fight a robot with faster response times and better "sensors" than a human being. The whole point of the American military is to make normal war untenable for everyone but us and people we like. This is going to happen even if an army of robots malfunctions and kills a whole city's worth of children; at the end of the day Americans will believe that none of the people they like are being killed because they're robots now, and if they have to be deployed anywhere it was for a good reason and the people there are obviously bad. FADEtoBLACK fucked around with this message at 06:10 on Jul 7, 2014 |
# ? Jul 7, 2014 06:03 |