ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

SubG posted:

First, tentative8e8op, I believe that the first example you give is actually the CIA, not the NSA. And ShadowHawk, I take your reference to be to Room 641A.

I think both of these illustrate my point. Which is not that these things are okay. Not that I approve of them. But if you look at these you don't conclude that the NSA is some kind of crazy outlaw agency. In one case because it's a completely different agency engaged in the same kind of activity, and in the other because the parties involved were given retroactive immunity by a literal act of Congress and the litigation against them as a result dismissed by the courts.
If the NSA wasn't a crazy outlaw agency, they wouldn't need a (likely misled) Congress to come in and pardon them for their original crimes. The NSA is pretty clearly an organization that does whatever it wants and then finds (dubious) legal justification after the fact. This isn't even bringing up the point that the Constitution itself is law; no one seriously expects the agency's actions to withstand judicial scrutiny once it actually ends up in court.

quote:

The point being here not (as you seem to be assuming I'm saying) that there's no problem or that everything the NSA does is great or whatever, but rather that the problem is bigger than the NSA. So whatever outrage you feel toward the NSA, that's cool. As I've already said, I'm not trying to talk you out of it. But the problem isn't that the NSA is a crazy rogue agency, it's that it isn't.

I mean, name all the indictments involving people connected to the NSA following the Snowden disclosures. I can think of I think three off the top of my head, and they're all whistleblowers/leakers. What does that tell you? Again, I'm not endorsing this state of affairs. I'm just observing that it is the state of affairs.
What it tells me is that other parts of the executive branch are also crazy outlaws, and that a few key members of the intelligence committee are neglecting their oversight duties. The rest of Congress most likely still doesn't grasp how bad it's gotten, or doesn't feel enough pressure from the public. Or maybe relevant members of Congress are targets of blackmail by the intelligence agencies themselves -- it's happened before.

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

I think what you're trying to say is that the protection of the content doesn't matter because the metadata is all that really matters.
No. I'm not saying that protecting the content doesn't matter or that `metadata' (however we define it) is all that matters. I'm saying that acquisition of metadata---and specifically routing information (who is talking to whom), and timing and volume information---constitutes surveillance.

Kobayashi posted:

I cannot agree with this statement. Even within the context of communication channels, there's more to metadata than simple routing information.
However we wish to define `metadata', the term will always encompass routing information and on a public network routing information will always be public.

I mean your other examples---search terms and subject lines---are also information that is disclosed to third parties (you can't search without disclosing the search term to the search engine, and the search engine will as a general matter disclose the search term to any site linked to from search results).
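To make that concrete, here's a quick Python illustration (the URL is made up): the search term travels in the URL's query string, which the engine necessarily sees, and historically that same URL is what browsers passed along as the Referer header to whatever result you clicked.

code:
from urllib.parse import urlparse, parse_qs

# The query string is part of the URL itself: the engine has to see it to
# answer the query, and the same URL is what traditionally got sent as the
# Referer header to sites you clicked through to from the results page.
referer = "https://www.google.com/search?q=backpack+bombs"
print(parse_qs(urlparse(referer).query)["q"])  # ['backpack bombs']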

But this has nothing to do with the point I'm making.

Kobayashi posted:

I think this is an area where we disagree. It sounds like you're saying it (always?) goes 1) analyze metadata, 2) target, 3) look at content and/or compromise device for further surveillance. I contend that both content and metadata are used, simultaneously, for targeting. That is, after Boston, I imagine some NSA analyst fired up Prism and asked it to find anyone in the Boston area (metadata) who emailed or chatted about "backpack bombs" (content) within the last six months (metadata). Mass encryption means the NSA can't go on reactive fishing trips like that. It means the terrorist fusion center in Ferguson can't go trawling through Michael Brown's digital life to find more fodder for character assassination.
You appear to be confusing the function of the PRISM web UI with the behaviour of the programme as a whole and all of the `product lines' associated with it. And you seem to be operating under a misunderstanding of where most of the data is coming from.

In simple terms the guy behind the PRISM UI is an information sink. All the stuff that provides something for him to look at are the information sources. The behaviour of the information sink is, in this context, a lot less interesting than all of the machinery of the data sources, which is where all of the questions of what data is collected, how much, how it is collected, and so on are decided.

In the case of something like PRISM mass encryption doesn't matter because regardless of whether or not a given request made by an analyst behind the web UI results in a search of existing data (`stored comms' in the parlance of the Snowden PRISM slides) or in new data collection (`surveillance' in the slides) most of the data is coming, via the FBI's DITU, from corporate data sources (`Providers (Google, Yahoo, etc.)' on the `PRISM Tasking Process' slide), to whom the content would be available anyway (that is, you can't encrypt your google searches to protect them from google).

ShadowHawk posted:

If the NSA wasn't a crazy outlaw agency, they wouldn't need a (likely misled) Congress to come in and pardon them for their original crimes. The NSA is pretty clearly an organization that does whatever it wants and then finds (dubious) legal justification after the fact. This isn't even bringing up the point that the Constitution itself is law; no one seriously expects the agency's actions to withstand judicial scrutiny once it actually ends up in court.
You seem to be arguing that the NSA is an outlaw agency because you think someone ought to declare their actions illegal and you're pretty sure that it might happen someday. That's cool. I'd like it if that happened. But it hasn't happened yet; if there are a bunch of indictments or whatever that I've missed, feel free to point them out.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

No. I'm not saying that protecting the content doesn't matter or that `metadata' (however we define it) is all that matters. I'm saying that acquisition of metadata---and specifically routing information (who is talking to whom), and timing and volume information---constitutes surveillance.

However we wish to define `metadata', the term will always encompass routing information and on a public network routing information will always be public.

OK, I don't disagree, but there's a real, substantive difference between metadata "encompassing" routing information and metadata being "semantically equivalent" (your words) to routing information. Furthermore, I don't agree that routing information must necessarily "always be public," and I thought you said as much in your last reply:

SubG posted:

`Metadata' here is semantically equivalent to `routing information', and on a public network like the internet routing information needs to be public or the traffic can't be routed. Any solution to this fundamental architectural problem is necessarily massively less efficient (in terms of metrics like latency, bandwidth, and so on) and therefore could not even in principle be used for the vast majority of internet communications. This means that any solution, quote unquote, to the problem could only be used for communications where the security of the communication is worth the additional overhead. And that means that any use of the system is basically waving your hand and saying `this is the poo poo that needs intercepting!' And if you have something that you're trying to keep secret the most important thing you can do is avoid disclosing that you've got a secret in the first place.

"Efficiency" (along any of the axes you list) is something that can be improved upon. I'm assuming good faith, so if this isn't what you meant that I'd be happy to read your clarification.

quote:

I mean your other examples---search terms and subject lines---are also information that is disclosed to third parties (you can't search without disclosing the search term to the search engine, and the search engine will as a general matter disclose the search term to any site linked to from search results).

But this has nothing to do with the point I'm making.

You appear to be confusing the function of the PRISM web UI with the behaviour of the programme as a whole and all of the `product lines' associated with it. And you seem to be operating under a misunderstanding of where most of the data is coming from.

In simple terms the guy behind the PRISM UI is an information sink. All the stuff that provides something for him to look at are the information sources. The behaviour of the information sink is, in this context, a lot less interesting than all of the machinery of the data sources, which is where all of the questions of what data is collected, how much, how it is collected, and so on are decided.

In the case of something like PRISM mass encryption doesn't matter because regardless of whether or not a given request made by an analyst behind the web UI results in a search of existing data (`stored comms' in the parlance of the Snowden PRISM slides) or in new data collection (`surveillance' in the slides) most of the data is coming, via the FBI's DITU, from corporate data sources (`Providers (Google, Yahoo, etc.)' on the `PRISM Tasking Process' slide), to whom the content would be available anyway (that is, you can't encrypt your google searches to protect them from google).

I'll be honest, I don't know what point you're trying to make. If it's that Prism was a bad example because its data comes from forced compliance, then sure, maybe I should have used Xkeyscore instead. Either way, there's nothing about the nature of the Prism program that undermines mass encryption per se, as I'm sure the Chinese would tell the NSA to gently caress off if it tried to compel access to a state-sponsored encrypted communication service. In that case, the NSA would have no choice but to actively compromise the service, either physically, or through the use of a software exploit of some kind. With that in mind, I come back to my original point, which is that mass encryption is a worthwhile goal because it substantially raises the level of effort required for mass (that's "mass," not "targeted") surveillance.

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

SubG posted:

You seem to be arguing that the NSA is an outlaw agency because you think someone ought to declare their actions illegal and you're pretty sure that it might happen someday. That's cool. I'd like it if that happened. But it hasn't happened yet; if there are a bunch of indictments or whatever that I've missed, feel free to point them out.
Yes. I think they're an outlaw agency because they're breaking the law.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
SubG, can you define for us what behavior would cause you to apply the label "outlaw agency"?

Elotana
Dec 12, 2003

and i'm putting it all on the goddamn expense account

SubG posted:

But beyond that, this line of argument appears to be predicated on the idea that there is no harm in the violation of privacy in and of itself. That is, it only matters if there is some other consequence (e.g., if you are arrested as a result of the privacy violation). Is that your position?
I think most people would agree that there is much less harm, although not no harm. In criminal law, privacy violations (stalking, for instance) are treated as inchoate offenses that are mostly bad because they make more serious harms such as blackmail and violence much easier. The same applies here. If our communications were all being monitored by angels, nobody would care about the monitoring in and of itself.

Since they're not, however, we also have to take into account the capacity for harm that a person monitoring possesses. A human at the FBI has a lot more potential to ruin your day than a human at Google (in fact, the most destructive thing a human at Google can do is probably to simply refer your poo poo to the human at the FBI). This applies equally to MonsterMind and Google's analysis algorithms since those are ultimately being programmed and tweaked by humans. You can also switch providers. The choices are limited, but they exist, and as far as I know, Google is not actively infiltrating and mass-collecting Yahoo data centers. The NSA, however, cannot be evaded except by very high-tech crypto or very low-tech avoidance of all modern infrastructure.

Incredulous Dylan
Oct 22, 2004

Fun Shoe
I am much more concerned about a government with a military (and increasingly militarized police force) intercepting all of my communications than Google. Not that I support Google having their fingers in so many things but the two organizations just aren't alike at all.

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

Furthermore, I don't agree that routing information must necessarily "always be public," and I thought you said as much in your last reply:

"Efficiency" (along any of the axes you list) is something that can be improved upon. I'm assuming good faith, so if this isn't what you meant that I'd be happy to read your clarification.
No on both counts. Routing information is necessarily public on a public network. Attempts to rectify this problem involve e.g. routing every message to every node (and presuming only the intended recipient will be able to read it). And when I say such protocols are less efficient I mean they are strictly less efficient than conventional communication channels.

Say there are a bunch of people sitting at a table in a restaurant. Alice wishes to send a message to Bob. So she writes a note and passes it to Bob. Or, as is more common in internet communications, she hands the note to whoever happens to be seated next to her and it gets passed around the table until it reaches Bob. In this case it is public knowledge that Alice is talking to Bob. Even if Alice encrypted the message in such a way that only Bob can read it, if Bob, upon receipt of the note, immediately shoots the waiter, people are likely to infer that Alice's message might have something to do with Bob's actions. Especially if this happens more than once.

Now say that Alice wants to conceal the fact that she's talking to Bob. She can write a note to everyone at the table and simultaneously hand them out. Now just looking at the behaviour of the communication channel nobody can tell that she's `really' just talking to Bob. Of course if Bob shoots the waiter, then they'll still be able to figure it out via this additional information. So Alice starts sending notes out to everyone at regular intervals, regardless of whether or not she has anything to tell Bob. So occasionally Bob's activities will appear to correspond to the arrival of Alice's notes, but so will everyone else's activities---that is, there will be no statistical bias between the arrival of one of Alice's notes and any activity by any individual versus any other activity by any other individual. This, incidentally, is the mode of operation of numbers stations.

Of course everyone still knows that Alice is the one doing the talking and Bob has no way of replying without outing himself. But everyone at the table is a diehard cypherpunk and so is willing to do elaborate horseshit for ideological reasons, and so everyone agrees to start sending notes to everyone else at regular intervals. Now there's no way to tell who's really talking to whom. The downside of this is that what started out as the transmission of one message has now exploded into the transmission of n² messages, where n is the number of people at the table.

Alternately, everyone could agree to post their public keys on the special of the day board (which everyone at the table can see). Everyone agrees to, at regular intervals, pass exactly one note to the person on their left. Alice can encrypt her message to Bob using his public key and then just pass it to her left. Anyone other than Bob who sees it can try decrypting it with their own private key and conclude nothing other than the fact that they can't read it. The guy on Alice's left doesn't know if the message is from Alice originally or if she got it from the guy to her right earlier. Of course the problem is that the message might get dropped before it reaches Bob, Bob can't act on Alice's message immediately (or someone would be able to work out, via timing, that Alice was the probable source), and the structure of the system (passing only one message and only at regular intervals instead of on demand) means that message throughput and latency are terrible compared to the pass-the-note-immediately model.
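If it helps, here's a toy version of that last scheme in Python, using the `cryptography' package (the four-person table and the note text are just for illustration):

code:
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Everyone at the table generates a keypair and `posts' the public half.
table = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
         for name in ("alice", "bob", "carol", "dave")}

# Alice encrypts her note with Bob's posted public key before passing it left.
note = table["bob"].public_key().encrypt(b"shoot the waiter", oaep)

# As the note circulates, each person tries their own private key; everyone
# but Bob gets a failure and learns nothing about the intended recipient.
for name, key in table.items():
    try:
        print(name, "reads:", key.decrypt(note, oaep))
    except ValueError:
        print(name, "can't read it")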

You can dick around with the implementation details on these basic models (and a few others), but the fact that they will be strictly less efficient than conventional communication channels is demanded by information theory, in the same way a channel with e.g. error correction will be.

Further discussion of this subject is probably beyond the scope of this thread, but my point is that this isn't a question of efficiency in the sense that you might, say, tweak an automobile engine to make it slightly more efficient (which is how you seemed to be parsing `efficiency' in this context).

Kobayashi posted:

I'll be honest, I don't know what point you're trying to make. If it's that Prism was a bad example because its data comes from forced compliance, then sure, maybe I should have used Xkeyscore instead. Either way, there's nothing about the nature of the Prism program that undermines mass encryption per se, as I'm sure the Chinese would tell the NSA to gently caress off if it tried to compel access to an state-sponsored encrypted communication service. In that case, the NSA would have no choice but to actively compromise the service, either physically, or through the use of a software exploit of some kind. With that in mind, I come back to my original point, which is that mass encryption is a worthwhile goal because it substantially raises the level of effort required for mass (that's "mass," not "targeted") surveillance.
The point is that you can't encrypt your search habits to obscure them from the search engine, and you can't encrypt your email to prevent the mailbox provider from reading the headers. Encrypting email in transit won't prevent the service provider from scanning the content, and if you somehow convince everyone to send you only encrypted email content you'll have to move it somewhere else---and presumably out of your browser---to read it.

Also, for the nth time, the distinction between `mass' and `targeted' surveillance is bunk. It is predicated on a false belief about the capabilities and structure of the surveillance systems. Put it this way---roughly one in seven people in the world use google monthly, and google does its level best to track the behaviour of each and every one of them individually. According to your `mass' and `targeted' dichotomy, you would be forced to argue that this is not in fact `mass' surveillance but rather `targeted' surveillance. And of course google is, per the Snowden disclosures, one of the major contributors of data for PRISM.

ShadowHawk posted:

Yes. I think they're an outlaw agency because they're breaking the law.
Room 641A is an absolutely terrible example to use to argue this. When Congress writes a bill and passes it and the President signs it, that's an according-to-Hoyle law. When the courts say that someone is behaving in accordance with that law, that's the normative basis for determining that they're not breaking the law.

I mean it stinks. I think it shouldn't be legal. But the fact that there's literally a law that says that literally that thing is permissible and the courts have dismissed actions against those involved on the basis of that law...well, to maintain that they're really breaking the law anyway is at very least an idiosyncratic approach to the subject.

Farmer Crack-Ass posted:

SubG, can you define for us what behavior would cause you to apply the label "outlaw agency"?
I think the label is a rhetorical device rather than an observation of fact. So I'm not sure it's a meaningful question. My point being that framing the subject that way rhetorically obscures the real issue, which is that the lawmakers, courts, and the executive branch have all been complicit in normalising the NSA behaviour we're talking about.

Elotana posted:

If our communications were all being monitored by angels, nobody would care about the monitoring in and of itself.
Are you talking about actual metaphysical angelic entities, or are you saying that nobody would mind if the eavesdroppers were all really nice guys? Because if the latter, holy poo poo no.

Elotana posted:

Since they're not, however, we also have to take into account the capacity for harm that a person monitoring possesses.
You seem to be arguing about damages, which is a separate subject.

Say you write a scathing political novel about Rick Perry and right before you publish it the state of Texas passes a law making distribution of your book illegal. You might be able to point to e.g. the preorder sales as monetary damages, but that has nothing to do with the First Amendment violation that led to the damages. If Washington state passed a law prohibiting anyone from making racially insensitive jokes about the President the First Amendment violation would be the same, regardless of nobody being able to demonstrate monetary damages as a result of the law.

And of course in the case of e.g. free speech there is the presumption that the courts can grant relief from the violation---if there's a law that's restraining your free speech, if the law is found to be unconstitutional then your speech ends up being restored. This isn't something that can be necessarily done with privacy. If you are keeping your sexual orientation, political affiliations, religious beliefs, or whatever private and they're disclosed to third parties, there's no way for a court to magically restore your privacy by getting those parties to suddenly un-know those things about you.

Elotana posted:

A human at the FBI has a lot more potential to ruin your day than a human at Google
I feel like we've already been over this in the last couple pages, and, as I argued then, this appears to be a very narrow theoretical distinction. If you want to address the arguments I've already made I'd be happy to continue discussing the subject, but I don't particularly want to repeat what I've already said if you simply disagree.

Elotana posted:

(in fact, the most destructive thing a human at Google can do is probably to simply refer your poo poo to the human at the FBI).
Even ignoring the possibility of them supplying information to other governments (if you concede the possibility that ratting someone out to the Chinese government, for example, might be more `destructive' than ratting them out to the FBI), I think that's a failure of imagination.

How do you feel about Facebook's experiments on manipulating users' emotions (689k of them---I wonder if that's `mass' or `targeted') or OKCupid's experiments intentionally mismatching users? I want to add that I'm not trying to draw some equivalence between Google searching gmail accounts for child pornography and turning users over to the Feds and Facebook trying to manipulate their users' emotions. I'm just proposing them as a basis for exploring your general `what possible harm' implication.

Elotana posted:

You can also switch providers. The choices are limited, but they exist[...].
Okay. Let's say I don't want to be the subject of any surveillance, `metadata collection', profiling, or any of that. Outline for me what I need to do in order to accomplish this. Assume I'm a professional working a salaried job.

Elotana
Dec 12, 2003

and i'm putting it all on the goddamn expense account

SubG posted:

Are you talking about actual metaphysical angelic entities, or are you saying that nobody would mind if the eavesdroppers were all really nice guys?
Insofar as I'm talking about fictional characters, the former. The real issue is that eavesdroppers can either be corrupted or experience mission creep once the surveillance machinery is in place. Insofar as "really nice guys [who never talk to anyone else about what they hear, never develop any practical goals beyond listening, and are perfectly incorruptible forever]" are creatures of fiction, then yes, I don't think anyone would care. And by "anyone" I mean "enough people where surveillance would be a major social issue."

SubG posted:

You seem to be arguing about damages, which is a separate subject.
It's one thing if you're going to pull this infinitely-divisible fishmeching to the degree that my not-very-long post becomes a five-part multiquote; it's another if you're going to do that and then in the same post assert that I'm saying something that I quite clearly am not.

SubG posted:

I feel like we've already been over this in the last couple pages, and, as I argued then, this appears to be a very narrow theoretical distinction. If you want to address the arguments I've already made I'd be happy to continue discussing the subject, but I don't particularly want to repeat what I've already said if you simply disagree.

Even ignoring the possibility of them supplying information to other governments (if you concede the possibility that ratting someone out to the Chinese government, for example, might be more `destructive' than ratting them out to the FBI), I think that's a failure of imagination.

How do you feel about Facebook's experiments on manipulating users' emotions (689k of them---I wonder if that's `mass' or `targeted') or OKCupid's experiments intentionally mismatching users? I want to add that I'm not trying to draw some equivalence between Google searching gmail accounts for child pornography and turning users over to the Feds and Facebook trying to manipulate their users' emotions. I'm just proposing them as a basis for exploring your general `what possible harm' implication.
I think that if you believe my post demonstrates a failure of imagination, you should go ahead and "explore those implications" right off the bat instead of citing the inducement of a fleeting feeling of disappointment. Get that fancy flying!

SubG posted:

Okay. Let's say I don't want to be the subject of any surveillance, `metadata collection', profiling, or any of that. Outline for me what I need to do in order to accomplish this. Assume I'm a professional working a salaried job.
I didn't say you could choose not to be the subject of collection, I said you could choose who collects information on you. You pretty much have to interact with a large communications provider today, just like you have to choose a provider for any other necessity of life. But you can choose which one you interact with. You have legal remedies against them, particularly as a class. And while you can choose a government, it seems there's nowhere on earth you can choose not to be surveilled by the NSA.

SubG
Aug 19, 2004

It's a hard world for little things.

Elotana posted:

I didn't say you could choose not to be the subject of collection, I said you could choose who collects information on you. You pretty much have to interact with a large communications provider today, just like you have to choose a provider for any other necessity of life. But you can choose which one you interact with.
In that case how do you imagine one should set out evaluating which providers to choose? This is from comments in which you talk about differentiating between different privacy violations based on the capacity of the violator to cause harm, so I gather determining this capacity is part of it, but I'm not sure what you think this actually entails.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

No on both counts. Routing information is necessarily public on a public network. Attempts to rectify this problem involve e.g. routing every message to every node (and presuming only the intended recipient will be able to read it). And when I say such protocols are less efficient I mean they are strictly less efficient than conventional communication channels.

Say there are a bunch of people sitting at a table in a restaurant. Alice wishes to send a message to Bob. So she writes a note and passes it to Bob. Or, as is more common in internet communications, she hands the note to whoever happens to be seated next to her and it gets passed around the table until it reaches Bob. In this case it is public knowledge that Alice is talking to Bob. Even if Alice encrypted the message in such a way that only Bob can read it, if Bob, upon receipt of the note, immediately shoots the waiter, people are likely to infer that Alice's message might have something to do with Bob's actions. Especially if this happens more than once.

Now say that Alice wants to conceal the fact that she's talking to Bob. She can write a note to everyone at the table and simultaneously hand them out. Now just looking at the behaviour of the communication channel nobody can tell that she's `really' just talking to Bob. Of course if Bob shoots the waiter, then they'll still be able to figure it out via this additional information. So Alice starts sending notes out to everyone at regular intervals, regardless of whether or not she has anything to tell Bob. So occasionally Bob's activities will appear to correspond to the arrival of Alice's notes, but so will everyone else's activities---that is, there will be no statistical bias between the arrival of one of Alice's notes and any activity by any individual versus any other activity by any other individual. This, incidentally, is the mode of operation of numbers stations.

Of course everyone still knows that Alice is the one doing the talking and Bob has no way of replying without outing himself. But everyone at the table is a diehard cypherpunk and so is willing to do elaborate horseshit for ideological reasons, and so everyone agrees to start sending notes to everyone else at regular intervals. Now there's no way to tell who's really talking to whom. The downside of this is that what started out as the transmission of one message has now exploded into the transmission of n² messages, where n is the number of people at the table.

Alternately, everyone could agree to post their public keys on the special of the day board (which everyone at the table can see). Everyone agrees to, at regular intervals, pass exactly one note to the person on their left. Alice can encrypt her message to Bob using his public key and then just pass it to her left. Anyone other than Bob who sees it can try decrypting it with their own private key and conclude nothing other than the fact that they can't read it. The guy on Alice's left doesn't know if the message is from Alice originally or if she got it from the guy to her right earlier. Of course the problem is that the message might get dropped before it reaches Bob, Bob can't act on Alice's message immediately (or someone would be able to work out, via timing, that Alice was the probable source), and the structure of the system (passing only one message and only at regular intervals instead of on demand) means that message throughput and latency are terrible compared to the pass-the-note-immediately model.

You can dick around with the implementation details on these basic models (and a few others), but the fact that they will be strictly less efficient than conventional communication channels is demanded by information theory, in the same way a channel with e.g. error correction will be.

Further discussion of this subject is probably beyond the scope of this thread, but my point is that this isn't a question of efficiency in the sense that you might, say, tweak an automobile engine to make it slightly more efficient (which is how you seemed to be parsing `efficiency' in this context).

SubG posted:

The point is that you can't encrypt your search habits to obscure them from the search engine, and you can't encrypt your email to prevent the mailbox provider from reading the headers. Encrypting email in transit won't prevent the service provider from scanning the content, and if you somehow convince everyone to send you only encrypted email content you'll have to move it somewhere else---and presumably out of your browser---to read it.

Also, for the nth time, the distinction between `mass' and `targeted' surveillance is bunk. It is predicated on a false belief about the capabilities and structure of the surveillance systems. Put it this way---roughly one in seven people in the world use google monthly, and google does its level best to track the behaviour of each and every one of them individually. According to your `mass' and `targeted' dichotomy, you would be forced to argue that this is not in fact `mass' surveillance but rather `targeted' surveillance. And of course google is, per the Snowden disclosures, one of the major contributors of data for PRISM.

Are you seriously trying to argue that all alternatives to the status quo are O(n^2) or worse? I mean, I can sort of see where you're coming from if you only consider "mass surveillance" to mean everything the NSA can do with Prism data + traffic/metadata analysis, but I define it as that, plus all the data the NSA gets from passive surveillance.

Likewise for the nth time, encryption doesn't do anything against compelled compliance, and usable communication protocols will probably always leak some metadata (though we can definitely decrease the amount of leakage). Perfect anonymity is not the goal. The goal is to increase the level of effort to maintain the same levels of surveillance -- to narrow the definition of mass surveillance. Broad, ubiquitous encryption largely eliminates passive surveillance. Better protocols reduce the amount of metadata leakage. Minimizing data collection and retention (like DuckDuckGo, which will be included as an option in iOS8) reduces the amount of information available through forced compliance. Forward secrecy makes it much more difficult for the NSA to build up indiscriminate dossiers over time. And so on and so on.
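To make the forward secrecy point concrete, here's a minimal sketch in Python with the `cryptography' package (authentication and the actual record encryption are omitted): each side uses a throwaway X25519 keypair per session, so recorded traffic stays sealed even if long-term keys are compromised later.

code:
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side generates an ephemeral keypair for this session only...
a_priv = X25519PrivateKey.generate()
b_priv = X25519PrivateKey.generate()

# ...they exchange public keys and independently derive the same session key.
a_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
             info=b"session").derive(a_priv.exchange(b_priv.public_key()))
b_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
             info=b"session").derive(b_priv.exchange(a_priv.public_key()))

assert a_key == b_key  # discard the private halves and the session stays sealed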

The NSA may want people to think that privacy is an all or nothing game, but that's not the case. It's a complicated, ever-changing set of tradeoffs that very much depends on an individual's goals and the NSA's level of interest. Better tools, which is how this derail started, are essential to the cause and worth investing in.

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

SubG posted:

Room 641A is an absolutely terrible example to use to argue this. When Congress writes a bill and passes it and the President signs it, that's an according-to-Hoyle law. When the courts say that someone is behaving in accordance with that law, that's the normative basis for determining that they're not breaking the law.

I mean it stinks. I think it shouldn't be legal. But the fact that there's literally a law that says that literally that thing is permissible and the courts have dismissed actions against those involved on the basis of that law...well, to maintain that they're really breaking the law anyway is at very least an idiosyncratic approach to the subject.
I asserted they were an outlaw agency because they were breaking current laws. You said they weren't an outlaw agency, and had full permission of Congress and the executive. I used 641A as an example of how they have absolutely no qualms with breaking the law, and there's no question that it was against the law at the time because it had to be made retroactively legal. There is no evidence the NSA has shown any more interest in following the law now than when they started 641A.

They are, simply put, completely out of control.

i am harry
Oct 14, 2003

Well all those words are a bit too boring for me. Here's something unrelated to NSA (currently), but very much related to surveillance:

Invisible vibrations reveal hidden soundtrack
https://www.youtube.com/watch?v=Ch2Cwi_fNqQ

This sure is a brave new world we're beginning to find ourselves in...

DOCTOR ZIMBARDO
May 8, 2006

JeffersonClay posted:

Surveillance by corporations in the free market is less onerous than surveillance by the state. That's the argument, right?

I want to return to this argument for a moment. Surveillance is a tool for the entire ruling class, whether State or private, to perpetuate an unbelievably inequitable and violent socio-economic-political system. Surveillance is unacceptable because it is put to ends that are necessarily and unavoidably bad (for everyone outside the ruling class) given the context in which it is used. From a strategic standpoint, it's probably more sensible to attack government surveillance, because it is used in ways that are more materially detrimental to actual human lives.

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

Are you seriously trying to argue that all alternatives to the status quo are O(n^2) or worse? I mean, I can sort of see where you're coming from if you only consider "mass surveillance" to mean everything the NSA can do with Prism data + traffic/metadata analysis, but I define it as that, plus all the data the NSA gets from passive surveillance.
There are a number of confusions going on here. Big O notation really doesn't belong here, as we're not talking about algorithmic complexity or anything along those lines. The behaviour you're attempting to describe with it isn't something that scales with the size of the input (number of notes in my example or size of message or whatever) but rather with the size of the communication network---n is the number of nodes. And big O notation gives an upper bound for the algorithm in question, while in the scenario I outline n² messages is a lower bound---it presumes that the communication channel is brought up when Alice starts sending and closes down when she's done (after exactly one `real' note is passed). In a practical scenario this is going to be much, much worse as every node has to transmit even when it's got no real content to transmit, and this will lower the net efficiency every time. And that's actually good for the privacy goals---the number of needles has to be small compared to the size of the haystack. Again, that's by design and is necessary for the system to serve its purpose---providing secrecy for the communication endpoints.
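To put rough numbers on the cover-traffic model (a back-of-the-envelope count of the toy scenario, not of any real protocol):

code:
# Every participant sends a note to every other participant each interval,
# whether or not they have anything real to say.
def notes_per_interval(n):
    return n * (n - 1)

for n in (4, 100, 10_000):
    print(n, "nodes ->", notes_per_interval(n), "notes per interval")
# 4 -> 12, 100 -> 9900, 10000 -> 99990000: roughly n² sends to hide one message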

The second model I described doesn't involve that kind of bandwidth usage (number of notes in my toy examples) but trades that off for some other loss in efficiency---lower message throughput, lack of reliability in message delivery, or whatever. And as I said, you can tweak around these models (and a couple others) to change the behaviours but the fundamental underlying reality is that the goal of providing endpoint secrecy is orthogonal to the efficiency goals of conventional communication channels, and this is true from first principles in information theory, not some lack of imagination on the part of implementers or something like that.

And the question, as I understand it, isn't coming up with some arbitrary `alternative to the status quo'. It's coming up with a communication method that offers endpoint secrecy against an adversary like the NSA. That's the only reason why you would consider this kind of wacky horseshit, and you'd abandon the idea of this wacky horseshit when you realise how impractical it is to implement and, simultaneously, that doing less won't offer you protection against an adversary like the NSA.

Further, all we're talking about here is passive surveillance. What I'm describing is the sort of minimum required effort to prevent the NSA from figuring out who you are and who you're talking to (and when and for how long) just by passively observing an opaque communication channel. What is required to offer meaningful resistance against active surveillance efforts balloons enormously---both for trivial active surveillance (e.g., inserting their own nodes into the communication network) and for more elaborate poo poo (like getting malware onto the exit nodes).

This is a fairly well-explored subject. If you're actually interested there's a lot of material out there that you can use to read up on the subject. This thread probably isn't a good place to continue discussion of this particular subject---I brought it up not to design such a protocol here, but rather to explain what I meant when I said that the efficiency of a channel that preserves the secrecy of the endpoints is necessarily lower than that of a conventional channel.

Kobayashi posted:

Better tools, which is how this derail started, are essential to the cause and worth investing in.
Absolutely. And placebo security is not worth investing in. What you've been talking about is placebo security, or in some cases placebo security which you have faith (your word) will one day be produced.

ShadowHawk posted:

I asserted they were an outlaw agency because they were breaking current laws. You said they weren't an outlaw agency, and had full permission of Congress and the executive. I used 641A as an example of how they have absolutely no qualms with breaking the law, and there's no question that it was against the law at the time because it had to be made retroactively legal. There is no evidence the NSA has shown any more interest in following the law now than when they started 641A.
Without reviewing the entire legislative history here, there was an existing law, the FISA, under which the NSA obtained surveillance authorisations from the FISC. The FISC in turn submits reports to Congress on this activity. It is in this context that the Room 641A activities were revealed. Following the disclosure, Congress passed an amendment to the law explicitly granting immunity from liability to those involved in the activities which were taken pursuant to an authorisation granted under the original law. They also expanded and extended the terms of the original law.

Your contention is that this is evidence of the NSA being an outlaw agency. This is a theory which was put forward by opponents of the bill before its passage. On the floor of the Senate Barbara Boxer said she thought the programme was illegal. Chris Dodd also called it illegal and suggested that offering retroactive immunity was an affront to the rule of law. But Dodd's arguments were rejected by a vote of 66 to 32, and the bill went on to pass in the Senate by a 69 to 28 vote.

That is, your argument, or something very like it, was put to a vote and it lost.

Further, despite your assertion that `there's no question that it was against the law at the time because it had to be made retroactively legal', that is not what the law does. It grants retroactive immunity from liability for activities conducted pursuant to the provisions of the original law. It doesn't grant retroactive immunity for any criminal conduct.

The motivation for offering the immunity, per contemporaneous debate on the subject, was to avoid disclosure of details about surveillance programmes in open court, which might happen if suits against providers complying with FISA requests were allowed to proceed. This doesn't imply that the Executive and Congress really in their heart of hearts knew that the programme was illegal and they wanted to cover it up. The amendment's provisions just don't loving make sense if that's what you think their intent was. The amendment clarifies and refines what they think is permissible under the law, and protects the secrecy of activities taken under its auspices.

Again, I'm not saying that this makes it right or that I agree with it. But it is absolutely contrary to the facts to argue that the NSA operated without the knowledge and consent of the Congress, the Executive, and the Courts.

SubG fucked around with this message at 06:54 on Aug 21, 2014

Dr.Caligari
May 5, 2005

"Here's a big, beautiful avatar for someone"

i am harry posted:

Well all those words are a bit too boring for me. Here's something unrelated to NSA (currently), but very much related to surveillance:

Invisible vibrations reveal hidden soundtrack
https://www.youtube.com/watch?v=Ch2Cwi_fNqQ

This sure is a brave new world we're beginning to find ourselves in...

I wonder how much stuff like that the government has already had, with the private sector just catching up?

I suppose you might know something is up when you go to file a patent and the patent office tells you something along the lines of "this is already on file. Move along"

After reading excerpts from Sled Driver and other articles discussing the capabilities of the SR-71 (which was first used in 19-loving-64), I can only imagine the kind of technology they have now.

Dr.Caligari fucked around with this message at 02:49 on Aug 21, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Dr.Caligari posted:

I wonder how much stuff like that the government has already had, with the private sector just catching up?
Using a high speed camera and photo analysis is a novel application (as far as I know), but recovering audio by monitoring the vibration of objects near a conversation has been done at least since the '40s, first using IR and more recently using lasers.

i am harry
Oct 14, 2003

Dr.Caligari posted:

I wonder how much stuff like that the government has already had, with the private sector just catching up?

I suppose you might know something is up when you go to file a patent and the patent office tells you something along the lines of "this is already on file. Move along"

After reading excerpts from Sled Driver and other articles discussing the capabilities of the SR-71 (which was first used in 19-loving-64), I can only imagine the kind of technology they have now.

I've had a couple candid conversations with drone fliers who said the eyes that can zoom down from tens of thousands of miles high come with ears as well.

OJ MIST 2 THE DICK
Sep 11, 2008

Anytime I need to see your face I just close my eyes
And I am taken to a place
Where your crystal minds and magenta feelings
Take up shelter in the base of my spine
Sweet like a chica cherry cola

-Cheap Trick

Nap Ghost

i am harry posted:

I've had a couple candid conversations with drone fliers who said the eyes that can zoom down from tens of thousands of miles high come with ears as well.

Miles?

i am harry
Oct 14, 2003


No sorry, I'm very tired you know what I mean...although...... ...


...

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

i am harry posted:

No sorry, I'm very tired you know what I mean...although...... ...


...

I mean, we can measure the distance to the moon to millimeter accuracy. If the drone thing is true, then they can factor out engine vibration and movement caused by ranging from an aircraft, so the idea of doing laser microphony from a satellite isn't inherently insane given where modern technology's at these days. Obviously there'd be problems from atmospheric distortion and stuff too, but we're getting pretty good at dealing with that kind of stuff to deal with other laser problems like thermal blooming.

The other way to interpret "comes with ears" is that drones mount some SIGINT gathering capability like monitoring frequency bands used for cell phones and stuff. Hell, you could probably even set up a stingray type setup that hacks common baseband chips and turns on the chip's GPS functionality or microphones.

Paul MaudDib fucked around with this message at 04:23 on Aug 21, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Paul MaudDib posted:

I mean, we can measure the distance to the moon to millimeter accuracy. If the drone thing is true, then they can factor out engine vibration and movement caused by ranging from an aircraft, so the idea of doing laser microphony from a satellite isn't inherently insane given where modern technology's at these days.
It's not technologically implausible, but probably impractical purely due to the fact that very few people (and presumably fewer persons of interest) have windows on their roofs, which would be required for use of a laser microphone from directly overhead. More oblique angles would both increase signal stability problems due to atmospheric interference (you have to look through more air the further from zenith over your target you get) and end up bouncing your return signal off to somewhere other than your satellite (unless you run into some very loving fortuitous pane placement).

If there are drones using laser microphones out there, I'd assume they're designed to manoeuvre into position and then perch---so something that looks like a wee helicopter (potentially delivered as a payload by a bigger drone like a Predator/Grey Eagle) rather than something like a Predator or a satellite.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

There are a number of confusions going on here. Big O notation really doesn't belong here, as we're not talking about algorithmic complexity or anything along those lines. The behaviour you're attempting to describe with it isn't something that scales with the size of the input (number of notes in my example or size of message or whatever) but rather with the size of the communication network---n is the number of nodes. And big O notation gives an upper bound for the algorithm in question, while in the scenario I outline n² messages is a lower bound---it presumes that the communication channel is brought up when Alice starts sending and closes down when she's done (after exactly one `real' note is passed). In a practical scenario this is going to be much, much worse as every node has to transmit even when it's got no real content to transmit, and this will lower the net efficiency every time. And that's actually good for the privacy goals---the number of needles has to be small compared to the size of the haystack. Again, that's by design and is necessary for the system to serve its purpose---providing secrecy for the communication endpoints.

Big O is used colloquially to make apples-to-apples comparisons (i.e., best-, average-, worst-case) all the time. You were describing hypothetical systems for protecting the metadata that is necessary for routing messages, and pointing out some of the very real, categorical inefficiencies that they exhibit vs. today's architecture. Which is great, but we don't need perfect secrecy to improve the status quo. We can start with passive surveillance. Encrypt everything (including the metadata that isn't routing essential) and passive surveillance largely goes away. Move key management to the clients (which I admit opens up huge usability problems) and programs like Prism are neutered (along with the business models that Prism-susceptible companies rely on). Yes, traffic analysis still exists and can be extremely revealing, but this is arguably the kind of intelligence the NSA should be gathering, not building up comprehensive, searchable digital dossiers on every person on the planet.
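As a toy sketch of what client-side key management buys you (Python with the `cryptography' package; the upload step is hypothetical):

code:
from cryptography.fernet import Fernet

# The key is generated and kept on the client; the provider only ever stores
# an opaque blob, so anything served out of the provider's data is ciphertext.
key = Fernet.generate_key()                      # never leaves the device
blob = Fernet(key).encrypt(b"draft: dear bob ...")

# provider.upload(blob)  # hypothetical call -- the provider sees only the blob

print(Fernet(key).decrypt(blob))  # readable only where the key lives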

SubG posted:

Further, all we're talking about here is passive surveillance. What I'm describing is the sort of minimum required effort to prevent the NSA from figuring out who you are and who you're talking to (and when and for how long) just by passively observing an opaque communication channel. What is required to offer meaningful resistance against active surveillance efforts balloons enormously---both for trivial active surveillance (e.g., inserting their own nodes into the communication network) and for more elaborate poo poo (like getting malware onto the exit nodes).

I don't know what you're talking about here. The NSA can't, to my knowledge, "trivially" compromise an SSL connection.

ShadowHawk
Jun 25, 2000

CERTIFIED PRE OWNED TESLA OWNER

SubG posted:

Without reviewing the entire legislative history here, there was an existing law, the FISA, under which the NSA obtained surveillance authorisations from the FISC. The FISC in turn submits reports to Congress on this activity. It is in this context that the Room 641A activities were revealed. Following the disclosure, Congress passed an amendment to the law explicitly granting immunity from liability to those involved in the activities which were taken pursuant to an authorisation granted under the original law. They also expanded and extended the terms of the original law.

Your contention is that this is evidence of the NSA being an outlaw agency. This is a theory which was put forward by opponents of the bill before it's passage. On the floor of the Senate Barbara Boxer said she thought the programme was illegal. Chris Dodd also called it illegal and suggested that offering retroactive immunity was an affront to the rule of law. But Dodd's arguments were rejected by a vote of 66 to 32, and the bill went on to pass in the Senate by a 69 to 28 vote.

That is, your argument, or something very like it, was put to a vote and it lost.

Further, despite your assertion that `there's no question that it was against the law at the time because it had to be made retroactively legal', that is not what the law does. I grants retroactive immunity from liability for activities conducted pursuant to the provisions of the original law. It doesn't grant retroactive immunity for any criminal conduct.

The motivation for offering the immunity, per contemporaneous debate on the subject, was to avoid disclosure of details about surveillance programmes in open court, which might happen if suits against providers complying with FISA requests were allowed to proceed. This doesn't imply that the Executive and Congress really in their heart of hearts knew that the programme was illegal and they wanted to cover it up. The amendment's provisions just don't loving make sense if that's what you think their intent was. The amendment clarifies and refines what they think is permissible under the law, and protects the secrecy of activities taken under its auspices.

Again, I'm not saying that this makes it right or that I agree with it. But it is absolutely contrary to the facts to argue that the NSA operated without the knowledge and consent of the Congress, the Executive, and the Courts.
They didn't have the executive branch on their side though. The DOJ didn't authorize the program. They even said it was illegal and unconstitutional. The NSA just went to a different part of the executive branch (namely, the office of the vice president's general counsel, the same guy who approved torture). If I go and hire Lionel Hutz to "approve" my drunk driving, it's still a crime. If I don't get prosecuted for it due to pragmatic or evidential reasons, it's still a crime. If I get the state legislature to retroactively legalize it due to political reasons, it was definitely a crime at the time.

And let's not forget, the constitution is the law. It is the highest law of the land, which every NSA employee has sworn an oath to uphold. Knowingly unconstitutional things are illegal.

Slanderer
May 6, 2007

ShadowHawk posted:

They didn't have the executive branch on their side though. The DOJ didn't authorize the program. They even said it was illegal and unconstitutional. The NSA just went to a different part of the executive branch (namely, the office of the vice president's general counsel, the same guy who approved torture). If I go and hire Lionel Hutz to "approve" my drunk driving, it's still a crime. If I don't get prosecuted for it due to pragmatic or evidential reasons, it's still a crime. If I get the state legislature to retroactively legalize it due to political reasons, it was definitely a crime at the time.

And let's not forget, the constitution is the law. It is the highest law of the land, which every NSA employee has sworn an oath to uphold. Knowingly unconstitutional things are illegal.

Congress disagrees, apparently.

JeffersonClay
Jun 17, 2003

by R. Guyovich

DOCTOR ZIMBARDO posted:

I want to return to this argument for a moment. Surveillance is a tool for the entire ruling class, whether State or private, to perpetuate an unbelievably inequitable and violent socio-economic-political system. Surveillance is unacceptable, because it is put to ends that are necessarily and unavoidably bad (for everyone outside the ruling class) because of the context that they are used in. From a strategic standpoint, it's probably more sensible to attack government surveillance, because it is used in ways that are more materially detrimental to actual human lives.

I don't agree that surveillance is necessarily and unavoidably bad, but I certainly agree that it has the potential to be very bad, particularly as a method for the powerful to maintain their control. They could accomplish this both with state surveillance and with private surveillance. Attacking state surveillance alone would not provide any significant long term protection, although it might stymie some current efforts.

But more importantly, I cannot imagine any system to resist and deter private surveillance that does not involve the state opposing private surveillance. And I cannot imagine how the state could actually determine if private surveillance was occurring without conducting similar surveillance. How could the state know that google was targeting radicals and discrediting them without knowing who the radicals are?

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

We can start with passive surveillance. Encrypt everything (including the metadata that isn't routing essential) and passive surveillance largely goes away.
Listen: we're talking about passive surveillance. The communication systems I've been talking about are entirely about resistance to passive surveillance. PRISM is passive. `Encrypt everything [...] and passive surveillance largely goes away' is either a) completely false, b) predicated on a peculiar definition of `passive surveillance', or c) a very optimistic notion of `largely goes away' that entails the vast majority of what has been discussed in this thread not in fact going away or being affected at all.

Kobayashi posted:

I don't know what you're talking about here. The NSA can't, to my knowledge, "trivially" compromise an SSL connection.
I mean compromising a scheme theoretically providing endpoint secrecy such that it does not provide endpoint secrecy. SSL does not provide endpoint secrecy. It is a design goal of, for example, TOR to do so. LEAs can and do compromise the endpoint secrecy of TOR.

I mean the NSA can in fact trivially compromise at least some meaningful subset of all SSL connections, in that they have programmes to do so on a click-to-compromise basis. I suppose you could hand-wave about the exact definition of `trivial' or question the model of reality presented by the slides (which I think is an interesting subject in itself). But that has nothing to do with my point, which is contextualising your `better than the status quo' argument, specifically with respect to protecting metadata.

ShadowHawk posted:

They didn't have the executive branch on their side though. The DOJ didn't authorize the program. They even said it was illegal and unconstitutional.
I'd love a reference on that. The Attorney General (Michael Mukasey), head of the DOJ, wrote a letter to Congress (with DNI McConnell) in which he says `we strongly support enactment of [the amendment in question].' And specifically on the subject of retroactive immunity, it says, `Liability protection is the fair and just result and is necessary to ensure the continued assistance of the private sector'. It later says:

The official position of the Administration as articulated by the head of the DOJ posted:

The Senate Intelligence Committee conducted an extensive study of the issue, which included the review of the relevant classified documents, numerous hearings, and testimony. After completing this comprehensive review, the Committee determined that providers had acted in response to written requests or directives stating that the activities had been authorized by the President and had been determined to be lawful[...].

I mean I could go on. The DOJ website has a whole loving page of cheerleading on the subject.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

Listen: we're talking about passive surveillance. The communication systems I've been talking about are entirely about resistance to passive surveillance. PRISM is passive. `Encrypt everything [...] and passive surveillance largely goes away' is either a) completely false, b) predicated on a peculiar definition of `passive surveillance', or c) a very optimistic notion of `largely goes away' that entails the vast majority of what has been discussed in this thread not in fact going away or being affected at all.

No wonder we disagree. I've been defining passive surveillance in terms of capturing data straight off the wire, storing it for later analysis. I've been defining active surveillance as essentially everything else, including Prism.

I include Prism in the active surveillance bucket because it requires the NSA to proactively do things, like 1) make note of "interesting" services and 2) attempt to compel compliance via the legal tools at its disposal. I see a sharp distinction between proactively identifying targets and then requesting data from what I assume to be a kind of API-like service implemented by third parties (Prism) and reactively trawling through its own, first party "databases of ruin," compiled by blindly sucking up whatever data it could siphon off the wire.

I didn't think this was a particularly "peculiar" definition for passive surveillance, but if it is, then I can certainly understand the confusion.

SubG posted:

I mean the NSA can in fact trivially compromise at least some meaningful subset of all SSL connections, in that they have programmes to do so on a click-to-compromise basis. I suppose you could hand-wave about the exact definition of `trivial' or question the model of reality presented by the slides (which I think is an interesting subject in itself).

This is absolutely news to me (unless you're talking about exploiting the device itself) and I'd appreciate it if you could link to what you're referring to.

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

No wonder we disagree. I've been defining passive surveillance in terms of capturing data straight off the wire, storing it for later analysis. I've been defining active surveillance as essentially everything else, including Prism.

I include Prism in the active surveillance bucket because it requires the NSA to proactively do things, like 1) make note of "interesting" services and 2) attempt to compel compliance via the legal tools at its disposal. I see a sharp distinction between proactively identifying targets and then requesting data from what I assume to be a kind of API-like service implemented by third parties (Prism) and reactively trawling through its own, first party "databases of ruin," compiled by blindly sucking up whatever data it could siphon off the wire.
This is another example where the gap between your model of the situation and the actual facts widens the more you try to apply it to the situation. The data is not, as a rule, captured incident to an NSA request. The data is captured as part of the regular day-to-day operation of the service providers. The data the NSA will query tomorrow has already been collected by Google today. The NSA, by overt design of the surveillance regime, does not expend meaningfully more effort on reviewing that data than it would conducting a search through a massive database (or rather through several massive databases). Google collects the data today. Tomorrow someone in the NSA wants to search it and the NSA gets it from Google. The following day someone else in the NSA wants to do the same search and does so with the data already in the NSA's databases. On all three days the NSA has expended the same amount of effort on collecting the data, to within rounding error for our purposes.

And all of this is really beside the point if we're talking about devising anonymity-preserving communication protocols, where the distinction is purely functional.

Kobayashi posted:

This is absolutely news to me (unless you're talking about exploiting the device itself) and I'd appreciate it if you could link to what you're referring to.
There are a couple of programmes involved with active attacks against SSL. FLYING PIG (which we know about from leaked GCHQ materials discussing the NSA programme) involves changing the route near the target system (so the target machine thinks it's talking to a server at 1.2.3.4 and it is, it's just not the machine at 1.2.3.4 that everyone else on the internet is talking to because a route table between the target and the NSA has been changed). Various parts of the QUANTUM programmes work by e.g. DNS cache poisoning (so the target machine looks up whatever.com and gets 1.2.3.4, an NSA server, instead of 4.3.2.1, the real server) and/or `conventional' traffic injection (the target machine thinks it's talking to 1.2.3.4 and every time it does the NSA races to put in its responses claiming to be from 1.2.3.4 before the responses from the real 1.2.3.4 can reach the target). Broadly all of these are MITM attacks that work because SSL essentially universally only cares that the name in the cert matches the name of the server in the URL and the cert is signed by a CA the browser recognises. Obtaining the cert is trivial for almost anyone (if you try to get a cert for whatever.com from most commercial CAs, they will make no attempt to verify that you're the legitimate operator of whatever.com, even if you're not an LEA). So the only interesting thing here is a) loving around with the route table or b) loving around with DNS. Neither of which are particularly difficult targets for an entity with the resources the NSA has. SSL just wasn't designed to resist attacks by that kind of threat.
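
To make the client-side half of this concrete, here is a minimal sketch in Python of the only two checks a stock TLS client performs; the function name and the use of example.com as the target are illustrative, and nothing NSA-specific is assumed:

code:
import socket
import ssl

def fetch_cert_like_a_browser(host: str, port: int = 443):
    # The default context enforces exactly the two checks described
    # above: the certificate chains to a CA in the local trust store,
    # and the name in the certificate matches the hostname requested.
    # Nothing here authenticates *which* machine actually answered.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# If an attacker controls routing or DNS for the victim and holds any
# CA-signed certificate for the name, this call succeeds exactly as it
# would against the legitimate server.
print(fetch_cert_like_a_browser("example.com"))
That's the entire trust decision: swap the route or the DNS answer, present a CA-signed cert, and the client is satisfied.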

Winkle-Daddy
Mar 10, 2007
Can you name a single CA that doesn't validate registration details? What I'm saying is "good luck getting a cert for a domain you don't own." Whether or not the NSA and GCHQ can get one through back-channel means is a totally different question. Also, if a CA is found to not be doing proper validation they will be dropped as trusted by browsers, thus ending that company.

SubG
Aug 19, 2004

It's a hard world for little things.

Winkle-Daddy posted:

Can you name a single CA that doesn't validate registration details? What I'm saying is "good luck getting a cert for a domain you don't own." Whether or not the NSA and GCHQ can get one through back-channel means is a totally different question. Also, if a CA is found to not be doing proper validation they will be dropped as trusted by browsers, thus ending that company.
I'm not particularly interested in getting into a discussion of what does and does not constitute meaningful validation for purposes of issuing an SSL cert, so let me rephrase my comment: the validation controls are certainly variable and there's no necessary correlation between the rigour of the validation and the value of the communications depending on it.

At any rate, as I implied and you stated outright, I think this is irrelevant to the point being made, which is that obtaining a cert that an arbitrary commodity browser will accept without complaint is not a technical barrier for e.g. the NSA.

mystes
May 31, 2006

SubG posted:

There are a couple of programmes involved with active attacks against SSL. FLYING PIG (which we know about from leaked GCHQ materials discussing the NSA programme) involves changing the route near the target system (so the target machine thinks it's talking to a server at 1.2.3.4 and it is, it's just not the machine at 1.2.3.4 that everyone else on the internet is talking to because a route table between the target and the NSA has been changed). Various parts of the QUANTUM programmes work by e.g. DNS cache poisoning (so the target machine looks up whatever.com and gets 1.2.3.4, an NSA server, instead of 4.3.2.1, the real server) and/or `conventional' traffic injection (the target machine thinks it's talking to 1.2.3.4 and every time it does the NSA races to put in its responses claiming to be from 1.2.3.4 before the responses from the real 1.2.3.4 can reach the target). Broadly all of these are MITM attacks that work because SSL essentially universally only cares that the name in the cert matches the name of the server in the URL and the cert is signed by a CA the browser recognises. Obtaining the cert is trivial for almost anyone (if you try to get a cert for whatever.com from most commercial CAs, they will make no attempt to verify that you're the legitimate operator of whatever.com, even if you're not an LEA). So the only interesting thing here is a) loving around with the route table or b) loving around with DNS. Neither of which are particularly difficult targets for an entity with the resources the NSA has. SSL just wasn't designed to resist attacks by that kind of threat.
Allow me to repeat myself again: yes, a random person wouldn't notice if one of these things happens, so the NSA could be using these techniques on lots of people without them noticing. However, they absolutely can't do this to the entire internet without people noticing. The number of people who are going to notice that a certificate's fingerprint or CA has changed may be tiny out of the number of people on the internet, but it would still take about 5 seconds for the whole world to find out if this was happening to 100% of attempted SSL connections. Therefore, even if they could intercept the communications of any single person they wanted, they can't do it to everyone at once, which in itself is a huge improvement over the current situation.

Please address this point if you are going to continue to claim that SSL provides no protection against the NSA's ability to passively intercept communications.
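
For what it's worth, the check being described here can be sketched in a few lines of Python: pin a certificate fingerprint recorded on an earlier, trusted connection and alarm on any change. EXPECTED_FP below is a hypothetical placeholder, not a real value:

code:
import hashlib
import socket
import ssl

EXPECTED_FP = "0000000000000000"  # hypothetical pinned SHA-256 hex digest

def live_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes
    return hashlib.sha256(der).hexdigest()

if live_fingerprint("example.com") != EXPECTED_FP:
    # Could be a MITM presenting a different CA-signed cert -- or a
    # routine key rotation, which is why so few users bother to check.
    print("certificate changed since last visit")
Note that this only catches a substituted certificate; it does nothing if the interceptor holds the site's real key, a case raised later in the thread.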

SubG
Aug 19, 2004

It's a hard world for little things.

mystes posted:

Please address this point if you are going to continue to claim that SSL provides no protection against the NSA's ability to passively intercept communciations.
First of all, you're replying to the tag end of an exchange that started many posts back in which Kobayashi talked about concealing `metadata'---specifically routing information---to maintain privacy. SSL provides no protection whatsoever against this sort of threat, and wasn't designed to.

As for the broader question about the efficacy of SSL to thwart NSA surveillance, I have already commented on it, repeatedly. In short SSL doesn't provide anonymity or resistance to voluntary disclosure of information to third party service providers (e.g., the search terms you type into a search engine or email stored with gmail or yahoo).

I'm not saying that you shouldn't use encryption. In fact I've repeatedly said precisely the opposite. But the threat that using SSL will protect you from is not surveillance by the NSA, because the NSA can already determine with great fidelity what you're doing without looking into encrypted channels, and if what they learn leads them to conclude that they want to see more they have methods of doing so. Saying they could not apply these methods to every communication on the internet is true but irrelevant, as there is no scenario in which they would wish to do so. It's the wrong question to ask. Does SSL meaningfully present a barrier to NSA surveillance? No. If we accept the general picture painted by the recent disclosures, it does not.
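
As a concrete illustration of the metadata point (all field names below are illustrative, not drawn from any disclosed programme): even with TLS doing its job, a passive tap can still assemble a record like this for every connection, because the IP headers, the port, the cleartext SNI hostname in the handshake, the timing, and the volume all sit outside the encryption:

code:
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FlowRecord:
    src_ip: str        # visible in every IP header
    dst_ip: str        # visible in every IP header
    dst_port: int      # 443 already announces "HTTPS"
    sni_hostname: str  # sent in clear in the TLS ClientHello
    start: datetime    # timing
    total_bytes: int   # volume

# Everything in this record is collected without touching the crypto.
record = FlowRecord("203.0.113.7", "198.51.100.2", 443,
                    "mail.example.com", datetime.now(), 48213)
print(record)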

mystes
May 31, 2006

SubG posted:

First of all, you're replying to the tag end of an exchange that started many posts back in which Kobayashi talked about concealing `metadata'---specifically routing information---to maintain privacy. SSL provides no protection whatsoever against this sort of threat, and wasn't designed to.

As for the broader question about the efficacy of SSL to thwart NSA surveillance, I have already commented on it, repeatedly. In short SSL doesn't provide anonymity or resistance to voluntary disclosure of information to third party service providers (e.g., the search terms you type into a search engine or email stored with gmail or yahoo).

I'm not saying that you shouldn't use encryption. In fact I've repeatedly said precisely the opposite. But the threat that using SSL will protect you from is not surveillance by the NSA, because the NSA can already determine with great fidelity what you're doing without looking into encrypted channels, and if what they learn leads them to conclude that they want to see more they have methods of doing so. Saying they could not apply these methods to every communication on the internet is true but irrelevant, as there is no scenario in which they would wish to do so. It's the wrong question to ask. Does SSL meaningfully present a barrier to NSA surveillance? No. If we accept the general picture painted by the recent disclosures, it does not.
If intercepting unencrypted internet traffic is so unnecessary, why is the NSA doing it in the first place? Or conversely, isn't it a good idea to limit the NSA's ability to intercept traffic where possible even if it doesn't dramatically restrict their overall ability to collect information about people they're specifically interested in?

Yes, they can still use other programs like PRISM, but I don't see why you're so dismissive of SSL. PRISM seems to be limited to specific companies, so it can possibly be avoided by using another search engine, for example. Without SSL it wouldn't even matter whether your search engine was cooperating or not in the first place. This seems like a pretty big difference.

I guess it's good that you didn't actually specifically tell people not to use SSL, but going around saying that it's 100% useless is pretty much the same thing.

mystes fucked around with this message at 03:23 on Aug 23, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

mystes posted:

If intercepting unencrypted internet traffic is so unnecessary, why is the NSA doing it in the first place? Or conversely, isn't it a good idea to limit the NSA's ability to intercept traffic where possible even if it doesn't dramatically restrict their overall ability to collect information about people they're specifically interested in?
I'm pretty sure I never said anything about recovering the clear text being unnecessary, because that's a different kettle of fish. In any game of cat and mouse it is important to recognise that the cat and the mouse aren't playing the same game. In this case the mouse wins by not having his privacy violated by the NSA. The cat does not win just by compromising the privacy of the mouse (unless you believe the NSA is a cartoonish James Bond villain motivated entirely by love of evil or something), it wins by obtaining specific actionable intelligence about threats to national security. Unfortunately for the mouse this is a very unequal set of victory conditions---the mouse can lose the game without ever being specifically targeted by the cat.

In real-world terms, the number of `metadata' records collected by the NSA is easily on the order of hundreds of millions. I don't know any way of deriving an accurate specific number, but we know that, for example, they have obtained all of several telcos' call records for months at a time. By comparison the number of cases in which individuals have been targeted for acquisition of content (email, text messaging, photos, and so on) is much smaller: according to the Transparency Report for 2013 the number was in the high tens of thousands. This probably means that the total number of individuals actually affected was on the order of hundreds of thousands (because each communication has two endpoints, so target one person's email and you get mail to and from more than one person), or call it a million. Here's an infographic from the Washington Post covering a different time period.

While I absolutely do not want to minimise this from an individual privacy standpoint, from a technical standpoint this is loving peanuts. Like this is a dataset that's probably larger than the something awful forums, but not that much larger. The point being that if the NSA wanted to just slurp up every loving clear communication it could, it could slurp up a gently caress of a lot more than that. Put in slightly different terms, this is not what collection looks like when an organisation with the resources of the NSA decides to just go on an interception binge.

Why? The Post estimates that 89% of individuals (or rather accounts) were `bystander' accounts. That's bad from a privacy standpoint, but it's also a loving nightmare from an analysis standpoint. Elsewhere the Post reports emails from analysts complaining about this---they're tasked with reviewing these intercepts in the hopes of getting actionable intel, but if the poo poo was flagged erroneously it's wasted effort.

This is a problem not just for the NSA but for data analysis in general---if the thing you're looking for occurs infrequently and your tests for it aren't perfect, you end up getting more false positives than true positives. So just collecting as much raw data as possible---in this case raw intercepts---isn't a good strategy for locating those true positives.
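
The arithmetic behind that false-positive problem is easy to check with made-up but plausible numbers (nothing below comes from the disclosures themselves):

code:
# Base-rate sketch: 1 person in 100,000 is a genuine target, and the
# flagging test is 99% sensitive with a 1% false-positive rate.
base_rate = 1 / 100_000   # P(target)
sensitivity = 0.99        # P(flagged | target)
false_pos = 0.01          # P(flagged | not target)

p_flagged = sensitivity * base_rate + false_pos * (1 - base_rate)
p_target_given_flag = sensitivity * base_rate / p_flagged

print(f"P(actual target | flagged) = {p_target_given_flag:.4%}")
# ~0.0989% -- roughly a thousand false positives for every true one.
Even a test that sounds excellent produces a haystack of flagged bystanders, which is exactly the analyst complaint described above.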

mystes posted:

Yes, they can still use other programs like PRISM, but I don't see why you're so dismissive of SSL. PRISM seems to be limited to specific companies, so it can possibly be avoided by using another search engine, for example. Without SSL it wouldn't even matter whether your search engine was cooperating or not in the first place. This seems like a pretty big difference.
A search engine that won't comply with CALEA/FAA 702 requests?

I mean I've already asked several times already for someone to lay out an approach for the average person who wants to avoid surveillance and nobody's tried. You want to have a go?

mystes posted:

I guess it's good that you didn't actually specifically tell people not to use SSL, but going around saying that it's 100% useless is pretty much the same thing.
I didn't say that.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

mystes posted:

Allow me to repeat myself again: yes, a random person wouldn't notice if one of these things happens, so the NSA could be using these techniques on lots of people without them noticing. However, they absolutely can't do this to the entire internet without people noticing.

They could use these techniques on anyone accessing certain key servers. If the NSA actually obtains the private key for the real certificate that a site is using, then there wouldn't be a different fingerprint to notice, and they have an entire Key Acquisition Service devoted to doing exactly that.

Odds are good that they wouldn't do that to Jane's Teapot Emporium, but for someone like Google? Yeah, they'd just get a copy of the client-facing certificates (and their private keys) and call it a day, leaving no evidence for end-users to notice.

Paul MaudDib fucked around with this message at 05:44 on Aug 23, 2014

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

This is another example where the gap between your model of the situation and the actual facts widens the more you try to apply it to the situation. The data is not, as a rule, captured incident to an NSA request. The data is captured as part of the regular day-to-day operation of the service providers. The data the NSA will query tomorrow has already been collected by Google today. The NSA, by overt design of the surveillance regime, does not expend meaningfully more effort on reviewing that data than it would conducting a search through a massive database (or rather through several massive databases). Google collects the data today. Tomorrow someone in the NSA wants to search it and the NSA gets it from Google. The following day someone else in the NSA wants to do the same search and does so with the data already in the NSA's databases. On all three days the NSA has expended the same amount of effort on collecting the data, to within rounding error for our purposes.


Of course Google collects the data. But the amount of data that is collected (by Google) and how long the data is retained (again, by Google) is up to Google. I'm not talking about surveillance from this point forward any more than I was talking about perfect anonymity earlier. I am talking about surveillance from this point back. Hence my reference to Snowden's notion of "databases of ruin" -- the ability for the FBI or DEA to go back and dig up dirt on someone after the fact is limited by the retention periods of the companies that feed into Prism.

SubG posted:

And all of this is really beside the point if we're talking about devising anonymity-preserving communication protocols,

SubG posted:

First of all, you're replying to the tag end of an exchange that started many posts back in which Kobayashi talked about concealing `metadata'---specifically routing information---to maintain privacy. SSL provides no protection whatsoever against this sort of threat, and wasn't designed to.

You seem to be operating under the assumption that any system that does not provide perfect anonymity is functionally identical to the NSA, which is absolutely not the case. Nor have I ever claimed such a system exists. I re-checked my posts in this thread. I posted about Signal, which piqued my interest because it's a usable voice app for iOS with client-side key management, and it will soon have chat capabilities. I also claimed that people are working on alternative communication protocols which attempt to minimize the amount of metadata leakage.

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

I'm not talking about surveillance from this point forward any more than I was talking about perfect anonymity earlier. I am talking about surveillance from this point back.
And that's why you're speaking nonsense. Whether you want to formulate this as a question of privacy/secrecy, general information theory, or as an aspect of protocol design, it doesn't matter whether party A collects the data, party B collects the data, or if party A collects the data and gives it to party B. Either the protocol works or it doesn't. Either an attacker can subvert it or not. This isn't a question about absolute or perfect security, as you keep trying to assert. It's about whether or not it is broken.

And leaving that aside, there's no difference in practice either as the disclosures which are the subject of this thread have demonstrated at great length. That is: any argument based on the idea that there's any meaningful distinction between what Google knows and what the NSA can learn is an argument predicated on a demonstrably false premise.

Put in other words, there is no way of construing your distinctions under which they are relevant to the subject.

And I'm not even going to get into the further nonsense you seem to be implying about Google's data retention policy and the NSA's. Are you seriously suggesting that there's some tiny window of time for private data retention at Google (and presumably other providers)? I don't know. I don't think I want to know.

Kobayashi posted:

You seem to be operating under the assumption that any system that does not provide perfect anonymity is functionally identical to the NSA, which is absolutely not the case.
No, I'm talking about what poses or does not pose meaningful protection of metadata against an adversary like the NSA.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

And that's why you're speaking nonsense. Whether you want to formulate this as a question of privacy/secrecy, general information theory, or as an aspect of protocol design, it doesn't matter whether party A collects the data, party B collects the data, or if party A collects the data and gives it to party B. Either the protocol works or it doesn't. Either an attacker can subvert it or not. This isn't a question about absolute or perfect security, as you keep trying to assert. It's about whether or not it is broken.

And leaving that aside, there's no difference in practice either as the disclosures which are the subject of this thread have demonstrated at great length. That is: any argument based on the idea that there's any meaningful distinction between what Google knows and what the NSA can learn is an argument predicated on a demonstrably false premise.

Put in other words, there is no way of construing your distinctions under which they are relevant to the subject.

If you don't see a distinction between one entity having all data from the entire Internet for what is essentially its entire history, and that same entity having to proactively request some targeted subset of data from a closed set of data providers, then I guess we'll just have to leave it at that.

SubG posted:

And I'm not even going to get into the further nonsense you seem to be implying about Google's data retention policy and the NSA's. Are you seriously suggesting that there's some tiny window of time for private data retention at Google (and presumably other providers)? I don't know. I don't think I want to know.

I don't know why you insist on being so categorical, because the specifics matter. I believe Google search results are retained for 9 months. Deleted emails stick around for maybe 30-60 days. Archived email and regular chats presumably stick around forever, in which case your point about the NSA and Google knowing effectively the same amount of information is true, but I doubt something like OTR chats (built into Gchat!) even falls under Prism. While Google does presumably know the keys, I highly doubt they're silently keeping a copy of every OTR conversation for the NSA. So, even something as simple as telling people to use the OTR feature of Gchat instead of email when they're talking about whether or not they should come out to their parents would have a material impact on whether they might get ensnared in some future NSA dragnet.

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

If you don't see a distinction between one entity having all data from the entire Internet for what is essentially its entire history, and that same entity having to proactively request some targeted subset of data from a closed set of data providers, then I guess we'll just have to leave it at that.
This is a mischaracterisation of the situation on every level. All data at all times versus some imagined narrow collection regime is a false dichotomy. The `closed set of data providers' is all of them that are not going to refuse to comply with CALEA/FAA 702/whatever requests. And however we wish to construe the extent of that set, it is demonstrably true that it includes all of the providers that matter (telcos, Google, Yahoo, Microsoft, Apple, and so on).

The question was what could be done to obscure `metadata' from the NSA. None of what you've said has any bearing, either from a theoretical security-as-a-serious-discipline standpoint or from a practical whatever-works-and-drat-the-theory standpoint. You can argue your finely calibrated distinctions all you want but they have no bearing on reality.
