 
  • Locked thread
FlamingLiberal
Jan 18, 2009

Would you like to play a game?



tentative8e8op posted:

I wasn't able to find anything about a new NSA leaker, but last month a State Department employee wrote an interesting article in the Washington Post
Yeah that was the guy, just saw something about it earlier and thought it was new.


Salt Fish
Sep 11, 2003

Cybernetic Crumb
Just saw this on Schneier's blog:

https://twitter.com/daveweigel/status/499588232494211072/photo/1



The NSA claims that Snowden was a low-level computer janitor while Snowden claims he was a higher-level spy. Kind of an interesting picture but nothing too huge.

SubG
Aug 19, 2004

It's a hard world for little things.

tentative8e8op posted:

A corporation I'm doing business with is not the same as military and law enforcement intelligence agencies who have agreements with many corporations, who send literal spies into corporations to sneak data out without notice, who maliciously hack into said corporations to steal information, and who will file secret orders in a secret court under a secret interpretation of a vague law to obtain any records. Their invasion of privacy occurs when information is outright taken by a fourth party, with and for identifying information and content, especially when such data is taken illegally.
Unless you're trying to make a narrow statutory/regulatory/Constitutional point here, I think you're missing the larger issue. If the same third party entity ends up in possession of the information from an individual privacy standpoint does it matter if the mechanism of the transfer was a sale, willing compliance with a request from a LEA, compelled compliance with a court order, or some sort of covert black bag super spy poo poo? My position is that, again purely from an individual privacy standpoint, the difference is de minimis.

tentative8e8op posted:

I'm sure it'd be simple to break into my home when I'm away at work, would you consider my diaries, journals and sketchpads to be private? I do.
I think this is a bad model. If you're talking about a computer (or other information system) that's physically and logically segregated from the internet then that's one thing. But hook that computer up to the internet and start making data transfers that cross state and national boundaries, doing things like posting to this message board (presumably you don't consider this post, or yours, to be private), then I think the situation is meaningfully different.

In public places you have no general expectation of privacy. If you wear a mask you might end up preventing a random stranger from photographing your face as you walk down the street. But if you're wearing a paper mask and the wind blows it off and that stranger photographs you, de jure you have not had your privacy violated because you had no reasonable expectation of privacy in the first place. If it turns out that there were a bunch of spooks in the bushes waving fans around to help the wind dislodge your mask that might be a separate, again de jure, issue but that doesn't change the fact that, because you were in public, you had no general reasonable expectation of privacy.

tentative8e8op posted:

I don't think we're in so dystopian a situation as to have no expectation of privacy, I just feel America's intelligence and law enforcement culture needs to be reined in.
I think we live in a surveillance state and we don't consider it dystopian because we've defined down dystopianism. And I say `we', not `you'; I don't feel particularly oppressed or twitchy or whatever.

But up above you make an implicit argument that there's not a privacy issue with corporate use of your data because you're willingly entering into an agreement with the corporation. I think that's really bunk. It has become effectively impossible to opt out of surveillance. You cannot travel on the roads without being the subject of surveillance by traffic cameras. You cannot travel by air without being subject to having your person searched---possibly by something as invasive as a `full body scan'. You cannot use the internet or a phone without having your activities profiled in intricate detail. And so on. In order to avoid this `background' surveillance one would effectively have to become a crazy Luddite recluse. And that's just private poo poo. I don't even know how you'd expect to obtain or hold down a salaried job without leaving a giant surveillance footprint.

And beyond all of that, I'd be willing to bet that even people with a more sophisticated understanding of privacy issues than the general population (like I expect most of the readers of this thread are, in that they've thought about the subject at all) probably don't understand all of the privacy agreements they've made, implicitly or explicitly, with all of the third parties who right now have their data. If I asked you to enumerate all of the personal data you've shared, and with whom, and what they are permitted to do with this data, I'd be surprised if you got half of it. I can't prove this of course. Nobody can. That's kinda the point.

And again: I'm not saying that this is the worst of all possible worlds or that we should smash all the computers and retreat to some bullshit agrarian lifestyle. I'm just saying that this is the new baseline, this is the world we live in, and that's the context in which we have to discuss poo poo like the NSA disclosures.

ShadowHawk posted:

There is a major difference between plugging a splitter into a datacenter and hacking every individual email server or desktop computer/phone in the country. The former exposes the attack to everyone, and when software exploits are done en masse they can be quickly detected and fixed (think about what happens when a code-red style computer virus gets into the wild). The NSA likely has exploits for every system out there, but just like antibiotics if they try to use them on everything they'll lose their effectiveness. The NSA doesn't have an infinite supply of these things.
It is impossible to know for sure because of the nature of the problem, but from the publicly known data it appears that the ratio of undetected to detected/reported/remediated exploits could easily be on the order of millions to one. Exploits that deface or delete content or adversely affect performance are far more likely to be observed than ones that do not cause such effects. If a worm or virus took no other action than e.g. replacing an encryption library I suspect the detection rate would be close to zero. And if it was detected it isn't as if there would be a readme.txt that identified the exploit as the work of the NSA or whatever.

ShadowHawk posted:

Encryption is the difference between the NSA knowing that I emailed a private confession to my mistress and the NSA knowing that I sent an email on the same day some other gmail user received one.
If they're monitoring all your email and phone traffic then even if it's encrypted and they can't read any of it or get access to the clear communications via some other means, they still know that she's your mistress.

ShadowHawk posted:

The mail servers can act as this intermediary. If email were encrypted properly the NSA couldn't know that an email sent was the same as one received. "Gets 12 emails from gmail a day" is a lot less of a privacy leak than "Gets 2 emails from his wife, 7 emails from his mistress, 0 emails from his kids, and 3 emails from a therapist".
Email addresses are not, and cannot (with public MTAs) be encrypted. So if everyone you communicate with is using the same mail service (so everything is delivered internally on the company's mail harness and never traverses public networks), and assuming the analyst looking at you has no visibility into their servers, then you might compel the analyst to use other methods. But if any of those presumptions aren't true---e.g. if you're using gmail and your mistress is on yahoo---then the fact that the content is encrypted won't prevent an analyst from being able to extract information about your social network from traffic analysis.
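
To make that concrete, here's a toy sketch (every address and timestamp below is invented) of how far envelope metadata alone gets an analyst, without reading a single message body:

code:
# Passive observer's log of SMTP envelopes: MAIL FROM / RCPT TO plus a
# timestamp. Encrypting the message body protects none of this.
from collections import Counter

observed = [
    ("you@gmail.example",      "mistress@yahoo.example", "2014-08-14 23:41"),
    ("you@gmail.example",      "mistress@yahoo.example", "2014-08-15 07:02"),
    ("wife@gmail.example",     "you@gmail.example",      "2014-08-14 18:10"),
    ("therapist@mail.example", "you@gmail.example",      "2014-08-14 09:30"),
]

# Per-pair frequency and timing is already a social graph.
pairs = Counter((src, dst) for src, dst, _ in observed)
for (src, dst), n in pairs.most_common():
    print(f"{src} -> {dst}: {n} messages")

Who talks to whom, how often, and when: that's the whole game.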

ShadowHawk posted:

The only way to get the latter is to directly hack into the mail server or sending computer -- and, as above, the NSA can't do that to every single one out there without risking losing their means of doing so.
Or they target for compromise the individuals for whom they can't obtain information via other methods. Again, this is literally something which we know they have both the methods and inclination to do per the Snowden disclosures. This is a thing that they are apparently capable of, and in fact have done, at scale and on an effectively automated (e.g. click-to-compromise) basis with very high success rates.

I mean if you want to imagine some cool cyberpunk future where everyone encrypts everything then yeah, the exact methods and programmes that the NSA has in place wouldn't be effective. But that's not the world we're living in. In the real world what we know is that using encryption is something that the NSA will use as evidence, in and of itself, for targeting the individual for additional surveillance. If everyone everywhere used encryption for everything at all times this policy wouldn't work. But it isn't as if they'd just throw up their hands and say `well, they won'. They'd use the other technologies and methods they've already developed for dealing with this, and they'd presumably develop new ones.

So in purely practical terms `just use encryption' doesn't work because we know they've got a gameplan for that, and in ideological (or however you want to say it) terms it doesn't work because a universal shift toward encryption would just produce a universal shift in surveillance technology just like every other drat advance in communication technology. Which has in the end, every time, resulted in greater capacity for information collection and surveillance.

I mean I'm not saying don't use encryption. Do. But, as I said before, the threat model using encryption protects you from is your neighbor's kid who's good with computers, not the loving NSA.

Kobayashi posted:

This is a good example. I don't think you're trying to claim that e-commerce is fundamentally flawed.
Well, I think it is but not in this sense. Things can be fundamentally flawed from a security standpoint and still be functional. Like, for example, the way credit cards work---it's comically hosed and readily abused and everybody knows it, but it's something that the companies just deal with because it doesn't prevent the things from being used and they can pass most of the costs of the hosed security along to their customers.

I think most PKI architectures are fundamentally flawed for technical reasons probably not worth getting into here, but that doesn't mean that they don't work, most of the time, in preventing the jackasses on the same local loop as you at your cable provider from snagging your credit card info. But you shouldn't try to extrapolate this out to a more general or stronger notion of `security' any more than the fact that most months there aren't any fraudulent charges on your credit card bill means that credit card numbers are `secure'.

Kobayashi posted:

Encryption works. It makes mass, indiscriminate surveillance impossible.
Manifestly false.

In addition to the purely factual error, it is a categorical error to expect encryption to protect against surveillance. Properly designed, implemented, and deployed encryption (so a strict minority of all encryption in general) will provide protection against recovery of the plaintext from the cyphertext. This impinges on the subject of surveillance at the periphery, but that's about the size of it. Encryption is important because it is quantifiable and evaluable, and it is easier to commodify than many of the other aspects of broader information security. But no, encryption doesn't make mass surveillance impossible. Strongly no. Encryption isn't a superpower.
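
As a minimal illustration of the distinction (this assumes the third-party Python `cryptography' package; the specific library is incidental), strong encryption makes the content unrecoverable to an observer without the key, while the existence, timing, endpoints, and rough size of the exchange stay observable:

code:
from cryptography.fernet import Fernet

key = Fernet.generate_key()
box = Fernet(key)
token = box.encrypt(b"meet me at the usual place at nine")

# Without the key the plaintext can't be recovered from the token...
# ...but an observer still sees that a message of roughly this size was
# sent, when, and between which endpoints. That is what surveillance feeds on.
print(len(token), "bytes of ciphertext on the wire")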

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Kobayashi posted:

Well-designed encrypted communication protocols do not leak that kind of metadata. It's not merely about encrypting the actual voice communication, but also about minimizing metadata leakage. Granted this is insanely complicated, and literally every secure communication service makes different tradeoffs, but if we talk about asynchronous communication (like email or IM), then the metadata leakage becomes less of a problem.

That's quite a claim. Even Tor has constant metadata leaks, and it's designed from the ground up for anonymity. That's true even when you're dealing with only things designed from the ground up to operate anonymously (i.e. .onion sites), and once you start transporting another insecure protocol over it there's an even higher chance of leakage.

At the end of the day the people writing code are only human, and it's fundamentally harder to secure software than it is to break it. Attackers only have to find one vulnerability to compromise people. If a system is decentralized, attackers can just as easily run nodes as anyone else, and that's a recipe for metadata leakage. If a system is centralized, well...
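
To give a sense of how cheap that is, here's a toy end-to-end correlation sketch (all timestamps invented): an attacker who can watch, or run, both the entry and exit points of a low-latency network can match flows on timing alone, with every byte in between encrypted.

code:
# Inter-packet gaps act as a fingerprint; encryption doesn't change them much.
def gaps(times):
    return [round(b - a, 2) for a, b in zip(times, times[1:])]

entry_flow = [1.00, 1.52, 2.10, 2.80]         # seen entering the network
exit_flows = {
    "exit_flow_a": [4.00, 4.31, 5.10, 6.45],
    "exit_flow_b": [1.30, 1.82, 2.40, 3.10],  # same spacing, shifted by latency
}

target = gaps(entry_flow)
for name, times in exit_flows.items():
    if gaps(times) == target:
        print("likely the same flow:", name)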

Paul MaudDib fucked around with this message at 01:08 on Aug 15, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Paul MaudDib posted:

That's quite a claim. Even Tor has constant metadata leaks, and it's designed from the ground up for anonymity. That's true even when you're dealing with only things designed from the ground up to operate anonymously (i.e. .onion sites), and once you start transporting another insecure protocol over it there's an even higher chance of leakage.
A classic example of data leakage from an encrypted channel is the recovery of keystrokes from timing information of encrypted ssh sessions. This recalls an earlier historical example, the development of TINA signatures---distinctive oscillograph patterns which were used to identify and track individual Kriegsmarine radio operators (and therefore ships and uboats) based on the timing of their Morse keying, independent of being able to decrypt the traffic transmitted.
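
The underlying trick is almost embarrassingly simple: in interactive ssh each keystroke tends to produce its own small packet, so a packet capture hands you the inter-keystroke latencies that Song et al. analysed back in 2001. A toy sketch with invented timestamps:

code:
# Timestamps (seconds) of packets in an encrypted interactive ssh session.
packet_times = [0.000, 0.210, 0.345, 0.360, 0.690, 0.810]

# Consecutive deltas approximate inter-keystroke latencies, which leak
# information about what was typed without touching the encryption at all.
latencies = [round(b - a, 3) for a, b in zip(packet_times, packet_times[1:])]
print(latencies)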

Snak
Oct 10, 2005

I myself will carry you to the Gates of Valhalla...
You will ride eternal,
shiny and chrome.
Grimey Drawer

SubG posted:


In public places you have no general expectation of privacy. If you wear a mask you might end up preventing a random stranger from photographing your face as you walk down the street. But if you're wearing a paper mask and the wind blows it off and that stranger photographs you, de jure you have not had your privacy violated because you had no reasonable expectation of privacy in the first place. If it turns out that there were a bunch of spooks in the bushes waving fans around to help the wind dislodge your mask that might be a separate, again de jure, issue but that doesn't change the fact that, because you were in public, you had no general reasonable expectation of privacy.


I consider all of my forum posts, facebook posts, youtube comments etc. to be public. Analysis of this information may disturb me, but ultimately I cannot consider it a violation of privacy. Emails, on the other hand, should be private. Sending an email to someone, or possibly even a private facebook message, should be associated with a reasonable expectation of privacy.

Similarly, a reporter observing and reporting on my traveling to and entering a private party is not a violation of my privacy, but if they enter that party under false pretenses and then report on my actions within the party they have violated my privacy. Someone who was at the party but not under false pretenses could report on my actions and that would not be a violation of my privacy (although it would probably be a violation of trust).

SubG
Aug 19, 2004

It's a hard world for little things.

Snak posted:

Emails, on the other hand, should be private.
I agree that this is how I would wish it to be, but that is not how e.g. Google looks at the subject, and the latter almost certainly has a greater impact on privacy in the real world than the former.

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.

SubG posted:

I agree that this is how I would wish it to be, but that is not how e.g. Google looks at the subject, and the latter almost certainly has a greater impact on privacy in the real world than the former.
Google's view is that if you sent an email to a user of Google Mail, you should not be surprised if Google's servers look at it. This is accurate. The alternative is making spamblocking illegal unless it's clientside and run by the recipient (inherently less accurate than serverside).

Malloc Voidstar fucked around with this message at 07:18 on Aug 15, 2014

Snak
Oct 10, 2005

I myself will carry you to the Gates of Valhalla...
You will ride eternal,
shiny and chrome.
Grimey Drawer

SubG posted:

I agree that this is how I would wish it to be, but that is not how e.g. Google looks at the subject, and the latter almost certainly has a greater impact on privacy in the real world than the former.

And that's something I respect. If I use a service and I agree to some terms of service and don't read them closely enough, I don't really have a right to say my privacy is being violated. In that case, what happened was I didn't understand my privacy, and that's on me. However, if I were to create software that actually facilitated my privacy, there is a strong indication that the current NSA behavior would not respect my legitimate privacy and would use any means necessary to countermand it. There is a strong likelihood that any serious attempt at establishing privacy will cause me to be labeled an extremist.

I find this ironic, because, as a patriot, it immediately occurs to me that enemies of America could use these lapses in privacy for their own ends. With the existence of Snowden, one cannot seriously suggest that the NSA's methods will only be available to American Intelligence...

SubG
Aug 19, 2004

It's a hard world for little things.

Aleksei Vasiliev posted:

Google's view is that if you sent an email to a user of Google Mail, you should not be surprised if Google's servers look at it. This is accurate. The alternative is making spamblocking illegal unless it's clientside and run by the recipient (inherently less accurate than serverside).
Google's view is that they can algorithmically analyse the content of anything you put on google's servers, including email. Which is more or less precisely what we're talking about MonsterMind doing:

google's terms of service posted:

Our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received, and when it is stored.

SubG
Aug 19, 2004

It's a hard world for little things.

Snak posted:

And that's something I respect. If I use a service and I agree to some terms of service and don't read them closely enough, I don't really have a right to say my privacy is being violated. In that case, what happened was I didn't understand my privacy, and that's on me. However, if I were to create software that actually facilitated my privacy, there is a strong indication that the current NSA behavior would not respect my legitimate privacy and would use any means necessary to countermand it. There is a strong likelihood that any serious attempt at establishing privacy will cause me to be labeled an extremist.
This appears to be another fairly narrow statutory or regulatory reading of privacy; your argument appears to be that all the US needs to do to prevent the NSA from invading your privacy is not to change the behaviour of the NSA but to change the US's terms of service.

Beyond that, I'd just reiterate what I said in replying to tentative8e8op: I'd find the `you don't have to accept the TOS' argument more compelling if there was a practical way an individual could opt out of such things en masse without becoming a privacy survivalist.

It is also worth pointing out that Google makes several billion dollars in profit each quarter---that is profit, not revenue. I take this to mean that either Google's users are selling access to their personal data for substantially less than market value, their personal data is being used in ways that they do not fully appreciate, or both. That is, even if we accept that there's an exchange of value being transacted here, it is a palpably unequal exchange.

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

SubG posted:

Or they target for compromise the individuals for whom they can't obtain information via other methods. Again, this is literally something which we know they have both the methods and inclination to do per the Snowden disclosures. This is a thing that they are apparently capable of, and in fact have done, at scale and on an effectively automated (e.g. click-to-compromise) basis with very high success rates.

I mean if you want to imagine some cool cyberpunk future where everyone encrypts everything then yeah, the exact methods and programmes that the NSA has in place wouldn't be effective. But that's not the world we're living in. In the real world what we know is that using encryption is something that the NSA will use as evidence, in and of itself, for targeting the individual for additional surveillance. If everyone everywhere used encryption for everything at all times this policy wouldn't work. But it isn't as if they'd just throw up their hands and say `well, they won'. They'd use the other technologies and methods they've already developed for dealing with this, and they'd presumably develop new ones.
That's why Snowden and the like have been suggesting the solution is systems that encrypt by default in a human-usable way. Using encryption on such a service doesn't set you apart, and either forces the NSA to use mass exploits or try to find targets on some other service.

I don't see why such a system is in principle impossible, we just haven't (until now) had particularly strong incentives to build one.

quote:

So in purely practical terms `just use encryption' doesn't work because we know they've got a gameplan for that, and in ideological (or however you want to say it) terms it doesn't work because a universal shift toward encryption would just produce a universal shift in surveillance technology just like every other drat advance in communication technology. Which has in the end, every time, resulted in greater capacity for information collection and surveillance.

I mean I'm not saying don't use encryption. Do. But, as I said before, the threat model using encryption protects you from is your neighbor's kid who's good with computers, not the loving NSA.
The threat to protect us from isn't targeted surveillance, it's mass surveillance. That's why we need mass encryption systems.

SubG
Aug 19, 2004

It's a hard world for little things.

ShadowHawk posted:

That's why Snowden and the like have been suggesting the solution is systems that encrypt by default in a human-usable way. Using encryption on such a service doesn't set you apart, and either forces the NSA to use mass exploits or try to find targets on some other service.

I don't see why such a system is in principle impossible, we just haven't (until now) had particularly strong incentives to build one.

The threat to protect us from isn't targeted surveillance, it's mass surveillance. That's why we need mass encryption systems.
I understand that this is how you feel, but you seem to be either ignoring or not addressing the points I've made about this.

Right now today all shrinkware encryption will accomplish (in re NSA surveillance) is perhaps shifting you from your notional mass surveillance column on the ledger to the targeted surveillance column. This poses no operational obstacle to the NSA for any reasonably imaginable rate of adoption of shrinkware encryption. If we wish to imagine some wild cyberpunk future where shrinkware encryption is used by everyone everywhere for everything then the NSA will use the other methods at its disposal and, presumably, develop new ones. All of which can be, and per the Snowden disclosures already are being, done at scale and in an automated fashion.

Additionally, while ubiquitous encryption may offer some protection against eavesdropping in the traditional sense, it does not pose an obstacle to broader surveillance, and so encryption is not, and cannot be, a remedy to mass surveillance; it is an error to presume it can. Beyond that, encryption systems are complex and brittle and all of the incidental fuckups can be exploited---again at scale and in an automated fashion---by an adversary like the NSA. So ubiquitous shrinkware encryption is almost certainly broken due to implementation fuckups even if it isn't compromised by active subversion via some other means.

And finally while you and I might care about this, people in aggregate do not. Information security cannot, on a mass scale, convince people not to click on bullshit email attachments and cannot, on a mass scale, convince software companies to produce software that prevents or prohibits this sort of silly behaviour. More than a quarter of the world is still using XP. Draw me the line that connects this present to your magnificent cryptofuture.

And leaving all that aside, if the NSA and other LEAs found themselves unable to get access to this kind of data due to technical difficulties, there's every indication that you'd just see broadening of requirements under e.g. CALEA to mandate government access to this sort of data.

And in addition to all of that the fact remains that all of the data is still out there in the hands of the telcos, places like google, and so on. And, as I've said, if we look at this as a privacy issue and not a gently caress the government issue then the problem would remain even if the NSA spontaneously turned into Mother Jones overnight. And, absent that kind of thing happening, if the data is there then I think it's a little naive to believe that a sufficiently motivated and capable organisation won't be able to get access to it one way or another. If less than a dozen 20-year-old hackers in Russia can do it, who should we imagine can't?

Winkle-Daddy
Mar 10, 2007
Your entire point rests on conflating mass surveillance with targeted surveillance. Mass surveillance is cheap because no one gives a gently caress about their online footprint the way they should. If data was properly treated by its custodians (I'm defining properly treated as proper end-to-end encryption here in a generic sense) then mass surveillance would become impractical. It would likely mean more individuals being targeted, but the idea that the NSA would simply find another method just as effective as slurping up all the traffic they can seems pretty far-fetched.

Even the examples of H/W tampering we've seen have been rather limited in scope (when compared to mass surveillance).

E: not that this can actually be fixed with "shrink wrapped" encryption solutions because "good" ones (luks, gpg/pgp, etc) are a gigantic pain in the dick for the average user and the ones targeted at the masses are written by bumbling retards (see Nadim of cryptocat). But as far as I can tell making good solutions less of a pain in the dick is a good way forward for now.

Winkle-Daddy fucked around with this message at 00:25 on Aug 16, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Winkle-Daddy posted:

Your entire point rests on conflating mass surveillance with targeted surveillance.
No. Or at least I don't think so. More or less my entire previous post was devoted to itemising why this isn't so. Since you declined to address any of it I don't know if I'm being unclear or if you're just not reading it.

So I'll put it another way. The MonsterMind disclosure is what got us talking again. It is predicated on the capability to identify and respond to a threat the size and shape of e.g. the Chinese cyberwarfare systems. If the NSA is capable of identifying and responding to something of that scope in a hands-off fashion, then I find it difficult to believe that Winkle-Daddy encrypting his email will stymie them, regardless of whether it ends up requiring them to shift from using purely passive `mass surveillance' methods to automated `targeted surveillance' methods. In other words I disbelieve you can scale the number of individuals who give a poo poo and are educated/motivated enough to get it right better than the NSA can scale their capabilities.

And that's leaving aside all the other poo poo I've already gone over---the possibility of compelling the use of surveillance-friendly technologies, the fact that we already know they have automated and large-scale capabilities to compromise the devices running the cryptographic software, the fact that the surveillance is going on whether or not the NSA has access to the data, and so on.

And, oh yeah, once again encryption doesn't prevent surveillance.

JeffersonClay
Jun 17, 2003

by R. Guyovich
Surveillance by corporations in the free market is less onerous than surveillance by the state. That's the argument, right?

SubG
Aug 19, 2004

It's a hard world for little things.

JeffersonClay posted:

Surveillance by corporations in the free market is less onerous than surveillance by the state. That's the argument, right?
It seems to be the implication although nobody actually seems willing to commit to it overtly.

It is certainly the case that there may be different legal implications depending on whether the entity doing the surveillance is public or private, but it isn't clear to me how this makes a difference to the privacy of the individual. And we find ourselves having to do even more elaborate tap-dancing around the issue when we consider poo poo like PRISM.

Thanqol
Feb 15, 2012

because our character has the 'poet' trait, this update shall be told in the format of a rap battle.

JeffersonClay posted:

Surveillance by corporations in the free market is less onerous than surveillance by the state. That's the argument, right?

Google can't arrest you.

SubG
Aug 19, 2004

It's a hard world for little things.

Thanqol posted:

Google can't arrest you.
They can certainly analyse your data and turn the data over to the authorities who can arrest you though.

But beyond that, this line of argument appears to be predicated on the idea that there is no harm in the violation of privacy in and of itself. That is, it only matters if there is some other consequence (e.g., if you are arrested as a result of the privacy violation). Is that your position?

XK
Jul 9, 2001

Star Citizen is everywhere. It is all around us. Even now, in this very room. You can see it's fidelity when you look out your window or when you watch youtube

SubG posted:

They can certainly analyse your data and turn the data over to the authorities who can arrest you though.

But beyond that, this line of argument appears to be predicated on the idea that there is no harm in the violation of privacy in and of itself. That is, it only matters if there is some other consequence (e.g., if you are arrested as a result of the privacy violation). Is that your position?

I honestly can't tell what you are arguing for. You seem to be suggesting everyone should just give up and not bother trying to keep anything private because the NSA will magic away any attempts at privacy anyway. Also, people agree to share private information with commercial entities all the time; that doesn't mean it's okay for the government to force itself into my private information. My doctor has my private medical information and I'm okay with that because they are my doctor; that doesn't mean the NSA should have access to it. The same arrangement goes for Google mail running an algorithm against my inbox. I can't see that you are suggesting anything other than that everything is hopeless, so give up, and Google does it anyway.

Thanqol
Feb 15, 2012

because our character has the 'poet' trait, this update shall be told in the format of a rap battle.

SubG posted:

They can certainly analyse your data and turn the data over to the authorities who can arrest you though.

But beyond that, this line of argument appears to be predicated on the idea that there is no harm in the violation of privacy in and of itself. That is, it only matters if there is some other consequence (e.g., if you are arrested as a result of the privacy violation). Is that your position?

The position is actually that the government, ideally, represents society's collective self defence against abuses, crime and malicious activity. In order to conduct these functions the government is entrusted with phenomenal power over life and death. In order to ensure that the government wields this enormous power it possesses for the good of society instead of for the advantage of those people entrusted with that power, a system of checks, balances and limitations exists. In the event where those checks, balances and limitations are bypassed then the government truly has nothing stopping it from exercising its phenomenal power over life and death purely to ensure the continued power and prosperity of the people entrusted with that power. Maybe their moral code keeps them doing the right thing for a while but that's essentially asking you to entrust the power of life and death in society to people you've never met who operate within an organisation that does not respect internal checks and balances.

Meanwhile, a corporation is a group of individuals out not for the collective good of society but, instead, the continued power and prosperity of the people who comprise it. In exchange for being allowed to possess such base motives, society demands that it operate within the rules and regulations devised by the government, which is, again, society's method of collective self defence against abuses. The corporation is free to follow any path it pleases to profit so long as it does not cross society, as defined and enforced by the government. Ideally, if Google misused people's data the government would correct that abuse, according to its function.

When the government begins to act like a corporation then it is truly a dire and terrible situation, because there is no regulatory or oversight body for the government (except maybe the Supreme Court). This results in an entity that possesses A) power over life and death, B) no guidance, regulation or limitation, and C) purely selfish motives. It is hard to see how society benefits from such an entity.

E: I'm theoretically fine with mass surveillance, what I'm not fine with is the criminal, hostile and secretive way that the NSA has been going about it, including its willingness to compromise civilian systems for its own advantage.

Thanqol fucked around with this message at 04:28 on Aug 16, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

XK posted:

I honestly can't tell what you are arguing for. You seem to be suggesting everyone should just give up and not bother trying to keep anything private because the NSA will magic away any attempts at privacy anyway.
That's not what I'm saying. In fact I've explicitly said that's not what I'm saying. Crypto cheerleaders have argued that the Snowden disclosures mean that we should use crypto to thwart NSA surveillance efforts. My argument, which I won't recapitulate here, is that this is wrong on virtually every level. But, again as I've explicitly said, this doesn't mean that you shouldn't use crypto. Just that crypto isn't a solution to the problems discussed in this thread.

That's really a side-issue to the comments I made that started this particular line of discussion, but I'm not the one that brought it up.

Thanqol posted:

E: I'm theoretically fine with mass surveillance, what I'm not fine with is the criminal, hostile and secretive way that the NSA has been going about it, including its willingness to compromise civilian systems for its own advantage.
Criminal? What criminal activity are you talking about? And are you arguing that if it isn't criminal then it's okay? I'm actually asking here.

Also, you either didn't answer my question or I'm failing to understand your response as an answer to it: do you believe that there's no harm in violation of privacy in and of itself, absent some other overt harm (e.g. getting arrested as a result of the privacy violation)?

As is probably already apparent I consider the violation of privacy to be an inherent harm, just like the violation of any other right; an evaluation of the quantitative level of harm in `real' terms (whatever those might be) is not necessary. I hasten to add that I'm not trying to suggest that quantitative harm is irrelevant, but I don't think that's all that's at stake here.

Thanqol
Feb 15, 2012

because our character has the 'poet' trait, this update shall be told in the format of a rap battle.

SubG posted:

Criminal? What criminal activity are you talking about? And are you arguing that if it isn't criminal then it's okay? I'm actually asking here.

Lying to congress repeatedly comes to mind, which was what did for Snowden in the first place.

quote:

Also, you either didn't answer my question or I'm failing to understand your response as an answer to it: do you believe that there's no harm in violation of privacy in and of itself, absent some other overt harm (e.g. getting arrested as a result of the privacy violation)?

As is probably already apparent I consider the violation of privacy to be an inherent harm, just like the violation of any other right; an evaluation of the quantitative level of harm in `real' terms (whatever those might be) is not necessary. I hasten to add that I'm not trying to suggest that quantitative harm is irrelevant, but I don't think that's all that's at stake here.

I'm not convinced mass surveillance is an inherent harm. I believe that there are ways a surveillance state could be made open, communal and effective. I believe that the flow of data can be harnessed and managed for good as well as evil.

However I think the National Surveillance Agency lacks the intelligence, morality, or strategic direction to be trusted with any of that data. I believe that if their use for mass surveillance is to catch insignificant numbers of terrorists rather than, say, identifying and prosecuting people who talk on mobile phones while driving (American road toll 2001: 42,196 - over 14 times the total terrorism casualties) then they possess a horrifying lack of imagination. The violation of privacy secretly, indiscriminately, for no purpose, in a way that can be easily abused for personal ends is terrible, and that is what the NSA does. The violation of privacy conceptually doesn't have to be any worse than talking to a government census collector - someone bound by clearly defined limits as to what they can do with that data and an unambiguously positive net outcome.

E: Either way, the possibilities of the technology should be a matter of public debate and clearly defined goals, not done invisibly and unaccountably by some amoral and incompetent spy agency.

Thanqol fucked around with this message at 08:02 on Aug 16, 2014

Zand
Jul 9, 2003

~ i'll take you for a ride ~ ride on a meteorite ~

Thanqol posted:

Google can't arrest you.
They can be compelled to provide their data to people that can though.

Thanqol
Feb 15, 2012

because our character has the 'poet' trait, this update shall be told in the format of a rap battle.

Zand posted:

They can be compelled to provide their data to people that can though.

Yes. And in order to be compelled to do that, there has to be a formal request made, preferably with a warrant. If there's no warrant then Google doesn't have to give out poo poo. I do not begrudge law enforcement entities who possess sufficient probable cause to have a judge sign off on a warrant getting whatever data they need. That's fine.

I do begrudge warrantless surveillance on everyone, signed off on by mysterious secret ninja judges who are not part of the regular justice department, and then lied about to the country's governing officials. I begrudge all that time, manpower and data being used to look for terrorists almost as much as I'd begrudge it being used to look for people with six fingers. I begrudge the sheer incompetence that is responsible for all these leaks. The NSA's sin isn't just that it's evil, it's that it's an imbecilic system and a criminal waste of resources.

meristem
Oct 2, 2010
I HAVE THE ETIQUETTE OF STIFF AND THE PERSONALITY OF A GIANT CUNT.

SubG posted:

This should not be surprising, as it is precisely the same path that e.g. random hackers-for-profit have taken; the fact that for example credit card numbers are characteristically encrypted in transport has posed no particular obstacle to the theft of credit card data---they just compromise either the consumer-end by getting malware onto your PC, or compromise the server end and grab the data in bulk. This is not a process that involves a lot of leet hacker skills or whatever the gently caress, it is something that is literally more fire-and-forget than keeping the target machines up to date is. That's the problem.

SubG posted:

So ubiquitous shrinkware encryption is almost certainly broken due to implementation fuckups even if it isn't compromised by active subversion via some other means. And finally while you and I might care about this, people in aggregate do not. Information security cannot, on a mass scale, convince people not to click on bullshit email attachments and cannot, on a mass scale, convince software companies to produce software that prevents or prohibits this sort of silly behaviour. More than a quarter of the world is still using XP. Draw me the line that connects this present to your magnificent cryptofuture.

SubG posted:

And beyond all of that, I'd be willing to bet that even people with a more sophisticated understanding of privacy issues than the general population (like I expect most of the readers of this thread are, in that they've thought about the subject at all) probably don't understand all of the privacy agreements they've made, implicitly or explicitly, with all of the third parties who right now have their data. If I asked you to enumerate all of the personal data you've shared, and with whom, and what they are permitted to do with this data, I'd be surprised if you got half of it. I can't prove this of course. Nobody can. That's kinda the point.

These are some of the paragraphs that hit true with me. It feels like a very practical problem: I was recently changing phones (Android), and so I had to install a ton of apps. And give out a lot of privacy permissions to these apps. What did I install, exactly? What permissions did I give? I was doing it in bulk, so I honestly don't loving know; after a time, it all started to blur. And then, I read articles like this one. Or this one, which, among other things, links to a report claiming that nearly half of leading Android apps slurp more types of data than they require. It's all extremely leaky.

So, hey, now I should install a password manager and a privacy guard manager. Oh, and a crypto manager. How much time of my life am I supposed to devote to learning about all this, exactly? I am not a computer scientist, that's not my goddamn career! Meanwhile, my 59-yo father doesn't know that AdBlock exists.

It feels very overwhelming, is all. I don't really think most people will be interested enough to deal with the learning curve.

Incidentally, it makes me want to support Al Qaeda. At least they understand the value of an uncluttered interface.

SubG
Aug 19, 2004

It's a hard world for little things.

Thanqol posted:

Lying to congress repeatedly comes to mind, which was what did for Snowden in the first place.
Can you cite particular examples?

And are you saying that if they didn't lie about it, it's okay? Again, I'm actually asking to try to get a feel for your position here.

Thanqol posted:

I'm not convinced mass surveillance is an inherent harm.
I'm not saying that mass surveillance is an inherent harm, I'm saying that violation of privacy is an inherent harm, just like the violation of any other right. Surveillance may or may not violate privacy, and whether or not it is `mass' surveillance is not a good predictor.

Thanqol posted:

Yes. And in order to be compelled to do that, there has to be a formal request made, preferably with a warrant.
You seem to have a misapprehension about the relationship between information sources like Google and information sinks like the NSA. From what is known in the public record this is not a particularly adversarial relationship.

And you really didn't address my earlier response on this point---the fact that Google can and does analyse user content and voluntarily discloses it to LEAs. The story I linked involves Google automatically scanning images for child pornography and turning in violators. So they presumably have some automated image analysis algorithm that flags content for review by human operators who then look at the content, decide if it's a violation, and then hand the data over to law enforcement if they think it is.

If this does not constitute a privacy violation, then it is difficult to see how a government agency running precisely the same algorithm, having content flagged in the same way, and then having a law enforcement official review the flagged content and possibly acting upon it would be a privacy violation. And to take that position would seem to imply that the proper locus of criminal investigations involving child pornography is not in law enforcement agencies but rather in privately run internet service companies. Which would seem to be a surprising result.
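
Mechanically the pattern being described is nothing exotic. Stripped of the real machinery (perceptual hashes like PhotoDNA, review tooling, and so on) it's roughly the shape of the sketch below, a plain hash lookup against an invented list, purely for illustration:

code:
import hashlib

# Fingerprints of known-bad content supplied by a clearinghouse.
# The single entry here is invented for illustration.
known_bad = {hashlib.sha256(b"known-bad example blob").hexdigest()}

def flag_for_review(blob: bytes) -> bool:
    """True if an upload matches the known-bad list."""
    return hashlib.sha256(blob).hexdigest() in known_bad

uploads = [b"holiday photo bytes", b"known-bad example blob"]
review_queue = [u for u in uploads if flag_for_review(u)]
print(len(review_queue), "upload(s) queued for human review and possible referral")

Swap out who operates that loop, Mountain View or Fort Meade, and nothing about the mechanics, or about what happens to the user's content, changes.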

Thanqol
Feb 15, 2012

because our character has the 'poet' trait, this update shall be told in the format of a rap battle.

SubG posted:

Can you cite particular examples?

And are you saying that if they didn't lie about it, it's okay? Again, I'm actually asking to try to get a feel for your position here.

James Clapper lies under oath to congress.

I'm not saying that the lie is the dealbreaker there. The lie is the sign of an organisation utterly out of control and removed from oversight - an organisation that is paid 10.8 billion dollars a year. An organisation that cannot maintain basic internal safeguards to stop someone like Edward Snowden walking out the front door with literally everything. An organisation that relies on self reporting of abuses. An organisation that undermines the integrity of the entire internet. An organisation that spies on allies, subverts civilian infrastructure, and builds star trek style information dominance centres.

And what's the NSA trying to protect us from? The chance of death by terrorist is one in twenty million.

What I'm saying is that the NSA is a bad organisation. The incompetence and waste offends me far more than the thought of having my privacy violated. They suck at their jobs and their assigned task is stupid to begin with. They should be shut down and replaced with something of use to society.


quote:

I'm not saying that mass surveillance is an inherent harm, I'm saying that violation of privacy is an inherent harm, just like the violation of any other right. Surveillance may or may not violate privacy, and whether or not it is `mass' surveillance is not a good predictor.

And in society the government has the ability to violate rights when certain conditions are met, i.e. incarcerating people. That's fine, they should be able to do that, so long as it is subject to sufficient scrutiny and rigour and the benefit to society outweighs the harm and cost.

Privacy violations by the government are pretty low on the list of bad things that can happen to you so I'm not particularly fervent about that. Hell, they learn everything they need to know about you with each tax return you put in. Privacy violations by an unaccountable, wasteful, imbecilic organisation are really really bad because you've got no idea who's going to get that data or what they're going to do with it. Maybe a politician gets it! Maybe a corrupt cop! Maybe China! Who knows!?

quote:

You seem to have a misapprehension about the relationship between information sources like Google and information sinks like the NSA. From what is known in the public record this is not a particularly adversarial relationship.

Oh, I know that. But by the same token, if Joe Sheriff wants to find out what Murdering Lisa sent to her boyfriend the night he was killed he does need a warrant or Google won't give him anything. That's fine. That's a check on the power of the government to spy on its citizens.

NSA doesn't have that check. That's not fine.

quote:

And you really didn't address my earlier response on this point---the fact that Google can and does analyse user content and voluntarily discloses it to LEAs. The story I linked involves Google automatically scanning images for child pornography and turning in violators. So they presumably have some automated image analysis algorithm that flags content for review by human operators who then look at the content, decide if it's a violation, and then hand the data over to law enforcement if they think it is.

If this does not constitute a privacy violation, then it is difficult to see how a government agency running precisely the same algorithm, having content flagged in the same way, and then having a law enforcement official review the flagged content and possibly acting upon it would be a privacy violation. And to take that position would seem to imply that the proper locus of criminal investigations involving child pornography is not in law enforcement agencies but rather in privately run internet service companies. Which would seem to be a surprising result.

Google doing this is fine. Google is a private entity and it can do what it wants, so long as it doesn't break the law. Google also has its logo everywhere and clearly tells you when you're using its service, so if you find that Google is ratting you out to the feds you can go elsewhere. Google's only obligation is to obey the law; that's the deal we made with Google when we let it incorporate in our society. If Google runs amok then the government will deal with it according to its function.

If the NSA wants to build a giant monster spybot using taxpayer money we should ask why. If the NSA wants to use that giant spybot not to monitor and prosecute child pornographers, but instead, spy on literally everyone, we should ask why very loudly.

If the NSA comes to America and is like, 'hey, we think we can shut down child pornography forever, but we need to build this giant monster spybot to do it and it'll cost, like, ten billion dollars' then we should actually have a debate as a society if that's the play we want to go with. And that'd be fine.

Basically the system matters. Far more than anything else going on here, the system matters.

Tezzor
Jul 29, 2013
Probation
Can't post for 3 years!
The private/public separation is pretty meaningless when the NSA can just issue general warrants through a rubberstamp court.

itsgotmetoo
Oct 5, 2006

by zen death robot
The private/public distinction also completely ignores how people actually interact with their computers, in favor of antiquated case law.

SubG
Aug 19, 2004

It's a hard world for little things.
No offense, but if you're trying to portray the NSA as some sort of rogue criminal agency and that's your citation that's some seriously weak poo poo. Because it doesn't suggest any of the activities of the agency were themselves criminal or whatever, you're just insinuating that something the director said in public about a classified programme was misleading. I mean I'm not going to defend either the agency or the director but that's a really loving weak argument.

I get that you've got a huge loving hardon of indignation over the NSA. And you know, cool, you do that. But I think the problem with trying to portray the NSA as some sort of uniquely criminal and evil organisation obscures the real problem, which is that they're not. Given what's known in the public record, the NSA has operated with the knowledge and consent of the executive branch, the Congress, and the courts. If any laws have been broken they have been broken in a manner incidental to the operation of the programmes in question, not central to them. When Google or a telco collects a bunch of customer data and hands it off to the NSA, the NSA isn't breaking any laws. And Google isn't even violating its own terms of service. That, in large part, is the problem. That's my point.

If you're mad at the NSA, that's cool. I'm not trying to talk you out of it. It makes sense to be mad at the NSA. But what I'm saying is that the NSA isn't the heart of the problem, and it's because they aren't some crazy rogue agency doing crazy James Bond villain poo poo. As far as their approach to protecting the privacy of individuals is concerned they're completely typical. If anything, you can at least say that they compromise privacy as a sort of side effect. For a company like Google, it's the entire business model.

treasured8elief
Jul 25, 2011

Salad Prong
I'm phone posting atm so I'm sorry if I can't respond as well as I'd like to.

SubG posted:

Unless you're trying to make a narrow statutory/regulatory/Constitutional point here, I think you're missing the larger issue. If the same third party entity ends up in possession of the information from an individual privacy standpoint does it matter if the mechanism of the transfer was a sale, willing compliance with a request from a LEA, compelled compliance with a court order, or some sort of covert black bag super spy poo poo? My position is that, again purely from an individual privacy standpoint, the difference is de minimis.
Yes, it matters a lot to me whether an intelligence agency follows established legal procedures or not. We have constant examples of agencies doing "some sort of covert black bag super spy poo poo", and we've been shown that such data obtained without due process is often used in domestic trials as coming from "anonymous tips" to avoid revealing their source to judges.

Please, do you have a problem with hacking into lawmakers' computers to destroy incriminating information, and point-blank lying under oath about such actions? With lawlessly hacking into the internal networks of American companies to collect customers' information in bulk? With actually implanting people into American companies to "guide" such companies' policies? With actually implanting agents into American companies with an intent to take private and internal customer data without going through any courts? Do you not see how these processes are a lot worse than following legal procedures?

Do you not believe any of my examples are happening, or are you okay with them?

SubG posted:

tentative8e8op posted:

I'm sure it'd be simple to break into my home when I'm away at work, would you consider my diaries, journals and sketchpads to be private? I do.
I think this is a bad model. If you're talking about a computer (or other information system) that's physically and logically segregated from the internet then that's one thing. But hook that computer up to the internet and start making data transfers that cross state and national boundaries, doing things like posting to this message board (presumably you don't consider this post, or yours, to be private), then I think the situation is meaningfully different.
I like how you cut out the rest of my paragraph for your response.

tentative8e8op posted:

I'm sure it'd be simple to break into my home when I'm away at work, would you consider my diaries, journals and sketchpads to be private? I do. A nearly closed network, or data at rest on a storage server, being hackable does not mean all information within automatically loses any-and-all manners of privacy. What it means, to me, is that whoever breaks in to obtain or destroy such information is a criminal. Heck, even the Senate Intelligence Committee and Google consider their internal networks to be private.



SubG posted:

In public places you have no general expectation of privacy. If you wear a mask you might end up preventing a random stranger from photographing your face as you walk down the street. But if you're wearing a paper mask and the wind blows it off and that stranger photographs you, de jure you have not had your privacy violated because you had no reasonable expectation of privacy in the first place. If it turns out that there were a bunch of spooks in the bushes waving fans around to help the wind dislodge your mask that might be a separate, again de jure, issue but that doesn't change the fact that, because you were in public, you had no general reasonable expectation of privacy.

It'd end up more like "spooks" physically assaulting me to tear off and ruin my mask, which would be both an invasion of privacy and an illegal assault regardless of whether I'm in public or not. Where were you trying to go with your weird hypothetical?

SubG posted:

I think we live in a surveillance state and we don't consider it dystopian because we've defined down dystopianism. And I say `we', not `you'; I don't feel particularly oppressed or twitchy or whatever.

But up above you make an implicit argument that there's not a privacy issue with corporate use of your data because you're willingly entering into an agreement with the corporation. I think that's really bunk. It has become effectively impossible to opt out of surveillance. You cannot travel on the roads without being the subject of surveillance of traffic cameras. You cannot travel by air without being subject to having your person searched---possibly by something as invasive as `full body scan'. You cannot use the internet or a phone without having your activities profiled in intricate detail. And so on. In order to avoid this `background' surveillance one would effectively have to become a crazy Luddite recluse. And that's just private poo poo. I don't even know how you'd expect to obtain or hold down a salaried job without leaving a giant surveillance footprint.

And beyond all of that, I'd be willing to bet even people with comparatively sophisticated understanding of privacy issues compared to the general population (like I expect most of the readers of this thread are, in that they've thought about the subject at all) probably don't understand all of the privacy agreements they've made, implicitly or explicitly, with all of the third parties who right now have their data. If I asked you to enumerate all of the personal data you've shared and with whom and what they are permitted to do with this data I'd be surprised if you got half of it. I can't prove this of course. Nobody can. That's kinda the point.

You keep conflating corporate operational information with agencies who feel they've a right to ALL information everywhere regardless of legal procedures or privacy/publicity of such information, and you somehow feel anyone with any issues with this is a "crazy luddite recluse"

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

SubG posted:

No offense, but if you're trying to portray the NSA as some sort of rogue criminal agency and that's your citation that's some seriously weak poo poo. Because it doesn't suggest any of the activities of the agency were themselves criminal or whatever, you're just insinuating that something the director said in public about a classified programme was misleading. I mean I'm not going to defend either the agency or the director but that's a really loving weak argument.
Ok, how about the time they illegally put splitters on all the major backbone fiber installations in the country?

You know, the thing Congress had to make retroactively legal?

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

SubG posted:

It is impossible to know for sure because of the nature of the problem, but from the publicly known data it appears that the ratio of undetected to detected/reported/remediated exploits could easily be on the order of millions to one. Exploits that deface or delete content or adversely affect performance are far more likely to be observed than ones that do not cause such effects. If a worm or virus took no other action than e.g. replacing an encryption library I suspect the detection rate would be close to zero. And if it was detected it isn't as if there would be a readme.txt that identified the exploit as the work of the NSA or whatever.
Millions to one is the exact point here. There are hundreds of millions of devices the NSA would need to be continuously launching viruses against (and having information from them sent back to them) if the data weren't lying unencrypted at rest in easily accessible locations. That sort of stuff is noticeable, particularly when it involves spontaneous outbound traffic to unknown places.

quote:

If they're monitoring all your email and phone traffic then even if it's encrypted and they can't read any of it or get access to the clear communications via some other means, they still know that she's your mistress.

Email addresses are not, and cannot (with public MTAs) be encrypted. So if everyone you communicate with is using the same mail service (so everything is delivered internally on the company's mail harness and never traverses public networks), and assuming the analyst looking at you has no visibility into their servers, then you might compel the analyst to use other methods. But if any of those presumptions aren't true---e.g. if you're using gmail and your mistress is on yahoo---then the fact that the content is encrypted won't prevent an analyst from being able to extract information about your social network from traffic analysis.
Proper encryption can shield the target of communications as well. Not necessarily with today's email schemes, but concluding that technical solutions can't ever work because email as designed doesn't work that way is a weird sort of defeatism.
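
For what it's worth, here's a toy illustration (Python, with obviously made-up addresses) of the constraint I'm conceding for today's email: even when the body is strong ciphertext, the header and envelope fields that every intermediate mail server needs stay readable.

code:

from email.message import EmailMessage

# Toy illustration with made-up addresses: even a fully encrypted body
# leaves the routing fields readable to every mail server in the path.
msg = EmailMessage()
msg["From"] = "you@gmail.example"
msg["To"] = "mistress@yahoo.example"
msg["Subject"] = "..."
msg.set_content("-----BEGIN PGP MESSAGE-----\nhQEMA...\n-----END PGP MESSAGE-----\n")

# What the intermediate MTAs (and anyone watching them) still see:
print(msg["From"], "->", msg["To"], "|", len(msg.get_content()), "bytes of ciphertext")

Those routing fields are exactly the metadata a traffic-analysis graph wants, which is why a redesign would have to protect them too.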

quote:

Or they target for compromise the individuals for whom they can't obtain information via other methods. Again, this is literally something which we know they have both the methods and inclination to do per the Snowden disclosures. This is a thing that they are apparently capable of, and in fact have done, at scale and on an effectively automated (e.g. click-to-compromise) basis with very high success rates.

I mean if you want to imagine some cool cyberpunk future where everyone encrypts everything then yeah, the exact methods and programmes that the NSA has in place wouldn't be effective. But that's not the world we're living in. In the real world what we know is that using encryption is something that the NSA will use as evidence, in and of itself, for targeting the individual for additional surveillance. If everyone everywhere used encryption for everything at all times this policy wouldn't work. But it isn't as if they'd just throw up their hands and say `well, they won'. They'd use the other technologies and methods they've already developed for dealing with this, and they'd presumably develop new ones.
I suppose we're agreeing here? My whole point was that we should be moving towards a world of human-usable encryption by default.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

ShadowHawk posted:

Millions to one is the exact point here. There are hundreds of millions of devices the NSA would need to be continuously launching viruses against (and having information from them sent back to them) if the data weren't lying unencrypted at rest in easily accessible locations. That sort of stuff is noticeable, particularly when it involves spontaneous outbound traffic to unknown places.

That's not true, though. Implants are a last-resort measure. There's lots of intermediate steps between "scoop everything up unencrypted" and "send everyone a virus or give up".

Data isn't spread around the internet randomly and equally difficult to compromise in any location; there are high-value targets that let you scoop up a lot of communications for a small number of intrusions/implants. For something like SSL you only need to get the key once, and then you can just perform MITM attacks from your installations on the backbone. Your computer sees a valid certificate and there's no way to detect the MITM apart from a little more latency.

If you compromise a Certificate Authority key you wouldn't even need to steal the keys at a server level, you could just sign your own valid certificates. That would potentially be detectable, however, since the key would be different from the one actually used by the server.
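
To make the detection point concrete, here's a rough sketch (Python 3 with the third-party cryptography package; the host and the pinned fingerprint are placeholder values) of a public-key pinning check: compare the SPKI hash the server presents against one recorded earlier out of band. A freshly minted CA-signed impostor certificate shows up as a mismatch; a MITM replaying the server's actual stolen key sails right through, which is the distinction above.

code:

import hashlib
import ssl

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization

# Fingerprint recorded earlier, out of band (placeholder value).
PINNED_SPKI_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def spki_fingerprint(host, port=443):
    # Fetch the leaf certificate the server presents and hash its public key info.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"), default_backend())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

def key_looks_swapped(host):
    # Mismatch: the server (or something in the middle) is presenting a
    # different public key than the one we pinned -- e.g. an impostor
    # certificate signed by a compromised CA. A MITM holding the server's
    # real stolen key is NOT caught by this check.
    return spki_fingerprint(host) != PINNED_SPKI_SHA256

print(key_looks_swapped("www.example.com"))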

Paul MaudDib fucked around with this message at 20:08 on Aug 16, 2014

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

Paul MaudDib posted:

That's quite a claim. Even Tor has constant metadata leaks, and it's designed from the ground up for anonymity. That's true even when you're dealing with only things designed from the ground up to operate anonymously (i.e. .onion sites), and once you start transporting another insecure protocol over it there's an even higher chance of leakage.

Sorry, I thought I had sufficiently disclaimed my statement by adding that it was very difficult and involves a lot of tradeoffs. To be clear, I'm not claiming that there are perfectly secure, anonymous alternatives for digital communication. I was simply trying to say that newer communication protocols attempt to protect the metadata too.

SubG posted:

Manifestly false.

In addition to the purely factual error, it is a categorical error to expect encryption to protect against surveillance. Properly designed, implemented, and deployed encryption (so a strict minority of all encryption in general) will provide protection against recovery of the plaintext from the cyphertext. This impinges on the subject of surveillance at the periphery, but that's about the size of it. Encryption is important because it is quantifiable and evaluable, and it is comparatively easier to commodify than many of the other aspects of broader information security. But no, encryption doesn't make mass surveillance impossible. Strongly no. Encryption isn't a superpower.

OK, I think I see where the disconnect is here. I'm trying to argue that mass encryption makes passive surveillance infeasible. I think you're arguing that this is a distinction without a difference, as the NSA can effectively own any and all endpoints at little to no additional cost versus current activities.

Is that accurate? If so, then that's a point we can debate. Assuming I'm representing your argument correctly, I've highlighted the assertions that I'd like to challenge:

SubG posted:

Right now today all shrinkware encryption will accomplish (in re NSA surveillance) is perhaps shifting you from your notional mass surveillance column on the ledger to the targeted surveillance column. (1) This poses no operational obstacle to the NSA for any reasonably imaginable rate of adoption of shrinkware encryption. If we wish to imagine some wild cyberpunk future where shrinkware encryption is used by everyone everywhere for everything then (2) the NSA will use the other methods at its disposal and, presumably, (3) develop new ones. All of which can, and per the Snowden disclosures, (4) already are being done at scale and in an automated fashion. Additionally, while ubiquitous encryption may offer some protection against eavesdropping in the traditional sense, (5) it does not pose an obstacle to broader surveillance and so encryption is not, and cannot, be a remedy to mass surveillance and it is an error to presume it can. Beyond that, encryption systems are complex and brittle and all of the incidental fuckups can be exploited---again at scale and in an automated fashion---by an adversary like the NSA. So ubiquitous shrinkware encryption is almost certainly broken due to implementation fuckups even if it isn't compromised by active subversion via some other means. And finally while you and I might care about this, people in aggregate do not. Information security cannot, on a mass scale, convince people not to click on bullshit email attachments and cannot, on a mass scale, convince software companies to produce software that prevents or prohibits this sort of silly behaviour. More than a quarter of the world is still using XP. Draw me the line that connects this present to your magnificent cryptofuture.

And leaving all that aside, if the NSA and other LEAs found themselves unable to get access to this kind of data due to technical difficulties, (6) there's every indication that you'd just see broadening of requirements under e.g. CALEA to mandate government access to this sort of data.

1. I believe this is the core of our disagreement. You seem to be arguing that there is no effective difference between passively sucking up all the traffic out there and actively defeating encryption. I think there is a difference, for the reasons below.

2. From what I can tell, these techniques are reserved for high value targets. Interdiction and black bag jobs are expensive. Yes, the NSA might be able to physically compromise tens or perhaps hundreds of thousands of devices, but I see no evidence that they could ever physically compromise the billions of devices that are connected to the Internet. Likewise, the more broadly exploits are deployed, the greater the risk of detection. Either way, the NSA's job just got a lot more difficult.

3. I'm not entirely sure what you mean by this. If you're talking about new exploits, see above. If you're talking about the underlying encryption algorithms, then I go back to Snowden, who has repeatedly claimed that encryption works. There's no evidence the NSA has fundamentally broken the underlying math.

4. What kind of active exploits are being deployed at the same scale as passive surveillance?

5. Could you expand on this?

6. But that wouldn't affect cryptography research in other countries, especially those that are hostile to the United States. With unencrypted data, the NSA can suck up everything from everywhere and figure it out later. When that data is encrypted, the NSA needs to actively target a service and use a patchwork of legal, technological, and physical approaches to get access. The idea is to make the NSA play cryptographic whack-a-mole (keeping in mind that you and I obviously disagree on how effective encryption is in the first place).

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER
The best evidence that the NSA actually worries about encryption is all the underhanded lengths they go to trying to sabotage it. You don't need to try and trick the public into adopting a backdoored crypto algorithm if crypto algorithms don't matter.

Thanqol
Feb 15, 2012

because our character has the 'poet' trait, this update shall be told in the format of a rap battle.

SubG posted:

Given what's known in the public record, the NSA has operated with the knowledge and consent of the executive branch, the Congress, and the courts.

Since you disregarded my entire post to nitpick a citation, I'll do the same for you. Would you go ahead and provide some citation for this point? Because it runs totally contrary to my understanding of this story.

SubG
Aug 19, 2004

It's a hard world for little things.

tentative8e8op posted:

Yes, it matters a lot to me whether an intelligence agency follows established legal procedures or not.
You seem to be arguing against a position I haven't taken. I said---it was even bolded in the text you quoted---that from an individual privacy standpoint whether or not procedural niceties were followed seems, to me, to matter little. If a specific law is broken in the course of the privacy violation, that isn't the core problem; it's a separate, if related, one. Like if someone kidnaps you for an afternoon it doesn't matter, as far as being a kidnapping is concerned, whether you were planning on sitting around all day doing nothing in particular or if you were planning on balling strippers all day. I mean you might care more in one case or the other, but it isn't like you were more kidnapped in one scenario over the other.

My point is that the individual privacy violation is important in and of itself. And, as this discussion appears to illustrate, it's one that is getting deprecated in favour of outrage over the presumed depravity of the means by which the privacy violations were accomplished.

tentative8e8op posted:

Please, do you have a problem with hacking into lawmakers' computers to destroy incriminating information, and point-blank lying under oath about such actions? With lawlessly hacking into internal networks of American companies to collect customers' information in bulk? With actually implanting people into American companies to "guide" such companies' policies? With actually implanting agents into American companies with an intent to take private and internal customer data without going through any courts? Do you not see how these processes are a lot worse than following legal procedures?

Do you not believe any of my examples are happening, or are you okay with them?

ShadowHawk posted:

Ok, how about the time they illegally put splitters on all the major backbone fiber installations in the country?

You know, the thing Congress had to make retroactively legal?
I'm putting these together because I think I can address them in a single response.

First, tentative8e8op, I believe that the first example you give is actually the CIA, not the NSA. And ShadowHawk, I take your reference to be to Room 641A.

I think both of these illustrate my point. Which is not that these things are okay. Not that I approve of them. But if you look at these you don't conclude that the NSA is some kind of crazy outlaw agency. In one case because it's a completely different agency engaged in the same kind of activity, and in the other because the parties involved were given retroactive immunity by a literal act of Congress and the litigation against them as a result dismissed by the courts.

The point being here not (as you seem to be assuming I'm saying) that there's no problem or that everything the NSA does is great or whatever, but rather that the problem is bigger than the NSA. So whatever outrage you feel toward the NSA, that's cool. As I've already said, I'm not trying to talk you out of it. But the problem isn't that the NSA is a crazy rogue agency, it's that it isn't.

I mean name all the indictments involving people connected to the NSA following the Snowden disclosures. I can think of, I think, three off the top of my head, and they're all whistleblowers/leakers. What does that tell you? Again, I'm not endorsing this state of affairs. I'm just observing that it is the state of affairs.

tentative8e8op posted:

I like how you cut out the rest of my paragraph for your response.
I don't know what you're insinuating. I included the part which seemed to be your thesis because I was going to respond to it. I didn't include the rest of your argument but that doesn't mean I didn't read it or whatever. My point is that it is de facto the case that communications on the internet are, as a rule, not private. If you write something in a physical diary and keep it under your pillow and never let anyone see it, you have a practical expectation of privacy involving the contents of the diary. You put the same comments in a document stored on google or sent through email and you do not---or at least should not---have the same practical expectation of privacy.

Your point---that someone who engages in illegal activity to obtain the information is engaging in illegal activity---doesn't have any bearing on this basic underlying reality.

tentative8e8op posted:

You keep conflating corporate operational information with agencies who feel they've a right to ALL information everywhere regardless of legal procedures or privacy/publicity of such information, and you somehow feel anyone with any issues with this is a "crazy luddite recluse"
No, that is not what I have said. It isn't even a very plausible-sounding misreading of what I've said. My argument is that a violation of individual privacy is a violation of individual privacy regardless of whether the violation is incident to surveillance conducted by a government agency motivated by ostensible national security interests or by a corporate entity motivated by profit. I have not called anyone who objects to this characterisation a `crazy luddite[sic] recluse'. I have said that these sorts of individual privacy violations are so omnipresent that if one wished to avoid them all one would have to become a Luddite privacy survivalist.

ShadowHawk posted:

Millions to one is the exact point here. There are hundreds of millions of devices the NSA would need to be continuously launching viruses against (and having information from them sent back to them) if the data weren't lying unencrypted at rest in easily accessible locations. That sort of stuff is noticeable, particularly when it involves spontaneous outbound traffic to unknown places.
Paul MaudDib already responded to part of this, so I'll just amplify. Per the public record, traffic analysis (`metadata analysis' in general media reporting) is used on as broad a dataset as can be acquired. This is used to identify individual targets for e.g. cleartext interception. The number of communications targeted in this way is large in absolute terms but is small compared to the number of calls/emails/whatever the gently caress in the original dataset. It is only this smaller, targeted set where encryption of the communication channel will be relevant at all, and targeting this number of machines for click-to-compromise subversion is rounding error compared to the NSA's resources. It is rounding error compared to the resources of Russian hacker groups lacking the backing of any state actor, for that matter.

Also, there's no reason why the compromised machines would start spontaneously sending outbound traffic to unknown destinations. The compromised machines would continue sending outbound traffic to, for example, LinkedIn, only instead of actually talking to LinkedIn they're talking to the NSA (I use LinkedIn as an example because this is what was apparently in fact done to Belgian cryptographer Jean-Jacques Quisquater).

Kobayashi posted:

Sorry, I thought I had sufficiently disclaimed my statement by adding that it was very difficult and involves a lot of tradeoffs. To be clear, I'm not claiming that there are perfectly secure, anonymous alternatives for digital communication. I was simply trying to say that newer communication protocols attempt to protect the metadata too.
`Metadata' here is semantically equivalent to `routing information', and on a public network like the internet routing information needs to be public or the traffic can't be routed. Any solution to this fundamental architectural problem is necessarily massively less efficient (in terms of metrics like latency, bandwidth, and so on) and therefore could not even in principle be used for the vast majority of internet communications. This means that any solution, quote unquote, to the problem could only be used for communications where the security of the communication is worth the additional overhead. And that means that any use of the system is basically waving your hand and saying `this is the poo poo that needs intercepting!' And if you have something that you're trying to keep secret the most important thing you can do is avoid disclosing that you've got a secret in the first place.

Kobayashi posted:

OK, I think I see where the disconnect is here. I'm trying to argue that mass encryption makes passive surveillance infeasible.
Mass encryption doesn't make your phone records opaque, just the contents of your phone conversations. If you don't see how this is relevant you don't understand the problem.

Kobayashi posted:

2. From what I can tell, these techniques are reserved for high value targets. Interdiction and black bag jobs are expensive. Yes, the NSA might be able to physically compromise tens or perhaps hundreds of thousands of devices, but I see no evidence that they could ever physically compromise the billions of devices that are connected to the Internet.
I can see no evidence that they would ever need to physically compromise billions of devices.

The NSA collects billions if not trillions of `metadata' records---so-and-so called or emailed or whatever the gently caress such-and-such and the call lasted so long or the email was this many bytes or whatever. From all of these they construct a very large, very elaborate graph in which endpoints are individuals (or maybe organisations) and the edges are individual communications. Just doing this gets them the structure of social networks. Adding in information about timing, direction and volume of traffic, and so on and gets them behaviour.
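
For a sense of how little machinery that first step takes, here's a toy sketch (Python, with invented records and names) that builds that kind of graph from nothing but call metadata and ranks endpoints by connectedness. The real systems obviously fold in timing, direction, and volume as well, but the skeleton really is just edges between identifiers.

code:

from collections import defaultdict

# Invented call-detail records: (caller, callee, duration in seconds).
records = [
    ("alice", "bob",   120),
    ("alice", "carol",  30),
    ("bob",   "carol", 600),
    ("dave",  "alice",  45),
]

# Build an undirected graph: endpoints are people, edges are "these two talked".
graph = defaultdict(set)
for src, dst, _duration in records:
    graph[src].add(dst)
    graph[dst].add(src)

# Crudest possible "who matters in this network": rank endpoints by degree.
for node in sorted(graph, key=lambda n: len(graph[n]), reverse=True):
    print(node, len(graph[node]), sorted(graph[node]))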

Using this analysis---all of which used data for which encryption never even entered into the picture---they identify the individual communications that they care enough about to even consider looking at the content. On the order of millions is probably an upper bound for this, and it wouldn't surprise me if it was substantially lower---independent of how we characterise their motivations, they have significant incentive to use quicker, less expensive processes to filter results so that the most expensive analysis is only done on the data that is most likely to yield actionable content (this, incidentally, is a general problem in data analysis and is believed to be the reason Nigerian email scams are so unbelievable---so the scammers don't waste their time on anyone who's going to catch a clue and run off before they can be fleeced).

Of these targets selected for additional analysis or data collection, only a minority will be using encryption or whatever and so require compromising the device to facilitate the data collection. But per the Snowden disclosures this is still very much automated poo poo that is amenable to being done in bulk.

And then finally some small minority of the latter group of targets will also be resistant to automated compromise, and those will require additional effort. It is at the very bottom tier that we see poo poo like hardware modification. It is worth noting that this is something common enough that they've got the internal equivalent of off-the-shelf parts and processes for this kind of thing, but it's still used in a vanishingly small number of cases compared to the overall scale of the surveillance operations.

Kobayashi posted:

3. I'm not entirely sure what you mean by this. If you're talking about new exploits, see above. If you're talking about the underlying encryption algorithms, then I go back to Snowden, who has repeatedly claimed that encryption works. There's no evidence the NSA has fundamentally broken the underlying math.
Whether or not encryption works as encryption is, as I've explained at some length on multiple occasions, simply not the question.

Beyond that, I don't think it would be at all surprising to discover that the NSA hasn't been devoting resources toward compromising cryptography because they don't have to. Adoption in general is low, and if we believe the disclosures they're getting plenty of data via other channels.

That being said, as of 2014 the majority of websites are still using certificates signed with SHA-1, which is known to be vulnerable to various kinds of collision attacks. Given that the NSA designed SHA-1 I don't think it strains credulity to believe that the NSA has long known a method for producing SHA-1 collisions at will and so has no difficulty e.g. inserting itself into SSL channels or circumventing code signing mechanisms that rely on SHA-1. If this is true, then they really don't need anything else.
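
If you're curious where you personally stand on that, here's a quick sketch (Python 3 plus the third-party cryptography package; the hostname is a placeholder) for checking which hash algorithm signed the certificate a given server hands you:

code:

import ssl

from cryptography import x509
from cryptography.hazmat.backends import default_backend

def cert_signature_hash(host, port=443):
    # Fetch the server's leaf certificate and report the hash used to sign it.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"), default_backend())
    return cert.signature_hash_algorithm.name  # e.g. 'sha1' or 'sha256'

print(cert_signature_hash("www.example.com"))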

Thanqol posted:

Since you disregarded my entire post to nitpick a citation, I'll do the same for you. Would you go ahead and provide some citation for this point? Because it runs totally contrary to my understanding of this story.
I don't think it's a nitpick. You specifically characterised the NSA's behaviour as criminal. I asked for a citation. Twice, actually. You offered precisely one citation, so I responded to that one citation.

The rest of your comments seemed to be primarily rhetorical, and I really don't care about that kind of thing. Here's one of your paragraphs that I didn't bother responding specifically to:

Thanqol posted:

What I'm saying is that the NSA is a bad organisation. The incompetence and waste offends me far more than the thought of having my privacy violated. They suck at their jobs and their assigned task is stupid to begin with. They should be shut down and replaced with something of use to society.
Okay. You really don't like the NSA. I'm cool with that. Really. You also say that the fact that they're inefficient bothers you more than the fact that they're violating people's privacy. I think that's bonkers (if the NSA was fantastically efficient and competent at violating your privacy you'd be happier? really?), but hey whatever. I think I've made my position on the matter clear elsewhere.

If you want me to talk more about the problem being broader than the NSA, cool. I'll bring up some disclosures that I assume you're unfamiliar with or you would have already mentioned them---the FISC documents the EFF managed to get released where the FISC complains, at length and in detail, of substantial problems in the NSA's conduct both with the court and in conducting surveillance. Which at first blush looks a lot like a smoking gun if you're trying to paint a picture of the NSA as a rogue agency or whatever.

But while the FISC is apparently completely aware of these problems, it is simultaneously approving literally every FISA request made by the NSA (this is a PDF of a report on the FISA request activity submitted to the Senate Majority Leader). And whatever points you might want to make about the FISC being the tiniest possible fig leaf protecting the intelligence community's shame, at the very least it is clear that the FISC could express its apparent disapproval of the NSA's activities by, you know, denying FISA requests, or revoking approval of requests in which they (the FISC) suspect wrongdoing on the NSA's part. But none of that appears to have happened. If that doesn't imply that the NSA was operating with the knowledge and consent of the FISC, what would?

And I'll go out of my way to point out that I'm not saying that this doesn't bother me. I'm not saying I approve of this. I'm saying that trying to portray the NSA as a rogue agency fundamentally misses the real problem by grossly underestimating its scope. If I point to the President calling PRISM activities `modest encroachments of privacy', Congress granting retroactive immunity to those involved in Room 641A, or the FISC rubberstamping NSA surveillance requests despite knowing about violations I'm not saying that this all makes it okay. I'm saying that the NSA isn't the problem, it's a symptom.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

That's a lot of words, but it's been an interesting debate, and I'm still trying to decipher why you and I are so far apart on this. I hope you'll allow me to try to paraphrase your argument as I understand it. I think what you're trying to say is that the protection of the content doesn't matter because the metadata is all that really matters. It's the metadata that identifies you, classifies you, and targets you for more intense scrutiny. Is that roughly accurate? If so, then I want to concede that metadata leakage is an enormous problem, and one that is poorly understood in popular culture. I don't at all want to minimize how devastatingly important metadata is. With that said...

SubG posted:

`Metadata' here is semantically equivalent to `routing information', and on a public network like the internet routing information needs to be public or the traffic can't be routed.

I cannot agree with this statement. Even within the context of communication channels, there's more to metadata than simple routing information. For example, one of the issues that has come out of the Snowden revelations is that the NSA considers email subject lines to be metadata, which are absolutely irrelevant to routing. If we go beyond communication, then all the metadata associated with photos (e.g. location data) is irrelevant to routing. Search keywords, also argued to be "just metadata," are not relevant to routing.
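
The photo case is easy to demonstrate: most phone cameras stamp GPS coordinates straight into the file, and pulling them back out takes a few lines. A minimal sketch (Python with the Pillow library, reading a JPEG; 'photo.jpg' is a placeholder filename):

code:

from PIL import Image
from PIL.ExifTags import GPSTAGS, TAGS

# 'photo.jpg' is a placeholder filename.
img = Image.open("photo.jpg")
exif = img._getexif() or {}

# Map numeric EXIF tag ids to names, then dig out the GPS block.
named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
gps = {GPSTAGS.get(tag, tag): value for tag, value in named.get("GPSInfo", {}).items()}

# Typically includes GPSLatitude/GPSLongitude as degree/minute/second tuples.
print(gps)

None of that has anything to do with getting the bytes from one machine to another; it's just data about you riding along with the content.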

SubG posted:

The NSA collects billions if not trillions of `metadata' records---so-and-so called or emailed or whatever the gently caress such-and-such and the call lasted so long or the email was this many bytes or whatever. From all of these they construct a very large, very elaborate graph in which endpoints are individuals (or maybe organisations) and the edges are individual communications. Just doing this gets them the structure of social networks. Adding in information about timing, direction and volume of traffic, and so on and gets them behaviour.

Using this analysis---all of which used data for which encryption never even entered into the picture---they identify the individual communications that they care enough about to even consider looking at the content. On the order of millions is probably an upper bound for this, and it wouldn't surprise me if it was substantially lower---independent of how we characterise their motivations, they have significant incentive to use quicker, less expensive processes to filter results so that the most expensive analysis is only done on the data that is most likely to yield actionable content (this, incidentally, is a general problem in data analysis and is believed to be the reason Nigerian email scams are so unbelievable---so the scammers don't waste their time on anyone who's going to catch a clue and run off before they can be fleeced).

I think this is an area where we disagree. It sounds like you're saying it (always?) goes 1) analyze metadata, 2) target, 3) look at content and/or compromise device for further surveillance. I contend that both content and metadata are used, simultaneously, for targeting. That is, after Boston, I imagine some NSA analyst fired up Prism and asked it to find anyone in the Boston area (metadata) who emailed or chatted about "backpack bombs" (content) within the last six months (metadata). Mass encryption means the NSA can't go on reactive fishing trips like that. It means the terrorist fusion center in Ferguson can't go trawling through Michael Brown's digital life to find more fodder for character assassination.
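
To make that distinction concrete, here's a toy sketch (Python, entirely made-up records and field names) of the kind of combined query I'm imagining. The city filter only needs metadata; the keyword filter only works if the analyst can read the body, and it's exactly that last step that mass encryption takes away.

code:

# Entirely made-up message records, for illustration only.
messages = [
    {"sender": "user1", "city": "Boston", "date": "2013-03-02",
     "body": "lunch on thursday?"},
    {"sender": "user2", "city": "Boston", "date": "2013-03-10",
     "body": "selling my old backpack"},
]

def fishing_trip(msgs, city, keyword):
    hits = []
    for m in msgs:
        in_area = m["city"] == city                # metadata
        mentions = keyword in m["body"].lower()    # content -- useless once the body is ciphertext
        if in_area and mentions:
            hits.append(m["sender"])
    return hits

print(fishing_trip(messages, "Boston", "backpack"))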

SubG posted:

Any solution to this fundamental architectural problem is necessarily massively less efficient (in terms of metrics like latency, bandwidth, and so on) and therefore could not even in principle be used for the vast majority of internet communications. This means that any solution, quote unquote, to the problem could only be used for communications where the security of the communication is worth the additional overhead. And that means that any use of the system is basically waving your hand and saying `this is the poo poo that needs intercepting!' And if you have something that you're trying to keep secret the most important thing you can do is avoid disclosing that you've got a secret in the first place.

Inefficient, perhaps, but not impossible. I have faith that the solutions will get faster and easier to use, because people I trust are working on finding solutions to all the various architectural problems you outline. I realize that's not much of an argument if you believe the foundation is fundamentally flawed, though.

Kobayashi fucked around with this message at 03:00 on Aug 19, 2014

  • Locked thread