 
  • Locked thread
Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

The question was what could be done to obscure `metadata' from the NSA.

No, it wasn't.


SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

No, it wasn't.

Kobayashi posted:

To be clear, I'm not claiming that there are perfectly secure, anonymous alternatives for digital communication. I was simply trying to say that newer communication protocols attempt to protect the metadata too.
You can of course click on the link to go to the original post. This line of discussion has flowed from my response that you can't conceal routing information on a public network, describing what's actually necessary to mitigate this, talking about `efficiency' of such communication channels, addressing your confusion about `passive' surveillance, and so forth.

I await a response in which you make some infinitesimal distinction between `attempt to protect the metadata' and `obscure "metadata"' or something along those lines.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

You can of course click on the link to go to the original post. This line of discussion has flowed from my response that you can't conceal routing information on a public network, describing what's actually necessary to mitigate this, talking about `efficiency' of such communication channels, addressing your confusion about `passive' surveillance, and so forth.

I await a response in which you make some infinitesimal distinction between `attempt to protect the metadata' and `obscure "metadata"' or something along those lines.

Yeah, I already qualified that quote for Paul MauDib, but you keep beating on the "the only metadata that matters is routing information" drum. There's a lot more that constitutes metadata, from non-routing specific email headers and the subject line, to image EXIF data, to location and device information, to contact display names, and so on and so on. There's no reason for any of that to be transmitted unencrypted. In this context, claiming that modern communication protocols "attempt to protect the metadata" is not an infinitesimal distinction.
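For what it's worth, EXIF doesn't even need encrypting to keep it out of transit: it can simply be stripped before upload. A stdlib-only sketch (illustrative, not any particular app's actual code) that drops the APP1 segments where a JPEG keeps its EXIF/XMP data:

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Drop APP1 (EXIF/XMP) segments from a JPEG byte stream."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "missing JPEG SOI marker"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes) - 1:
        marker = jpeg_bytes[i + 1]
        if jpeg_bytes[i] != 0xFF or marker == 0xDA:
            # SOS (start of scan) or entropy-coded data: copy the rest verbatim.
            out += jpeg_bytes[i:]
            break
        # Segment length field covers itself plus the payload.
        seg_len = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker != 0xE1:  # keep every segment except APP1
            out += jpeg_bytes[i:i + 2 + seg_len]
        i += 2 + seg_len
    return bytes(out)
```

That this takes a dozen lines is sort of the point: there's no technical excuse for shipping it.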

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

Yeah, I already qualified that quote for Paul MauDib, but you keep beating on the "the only metadata that matters is routing information" drum. There's a lot more that constitutes metadata, from non-routing specific email headers and the subject line, to image EXIF data, to location and device information, to contact display names, and so on and so on. There's no reason for any of that to be transmitted unencrypted. In this context, claiming that modern communication protocols "attempt to protect the metadata" is not an infinitesimal distinction.
Once again there's some confusion here. In the context of FAA 702, which is to say the context of NSA surveillance, `metadata' is data which is disclosed to a third party as part of using a communication channel. EXIF image data is perhaps `metadata' in some broad colloquial sense, but not the sense that is relevant in this discussion. I use routing information as an exemplar of `metadata' because a) it's something everyone is familiar with, and b) it's something that is integral to NSA's surveillance efforts.

If you want to argue around this somehow, you'll find yourself in the position of asking whether or not the specific item of data you're considering is disclosed to one or more third parties as a part of using the communication system. If it is, then it's metadata. If you encrypt it, you still have to disclose it to the third party (as part of conducting the communication). So you can't hide it from them. And once you've done that, they have that information, and they can disclose that information to the NSA. Under FAA 702, they can be compelled to disclose it.

If the item of data in question does not have to be disclosed to a third party as part of the communication, then you can encrypt it to conceal it from the third party. It is therefore not `metadata' in the sense that is relevant to this discussion, it is just `data'. And by all means encrypt it. Of course encrypting this data by itself increases your chances of being targeted by the NSA for more aggressive surveillance, and all the `real metadata' associated with your communications is, per the preceding argument, still available to the NSA.

That's the scope of the problem. Encrypting EXIF data is a loving fart in the wind. If it makes you feel better, by all means do it. It might help protect you against unintended disclosure of the EXIF data to your friends on facebook or whatever the gently caress, but it is laughable in the context of trying to thwart NSA surveillance efforts.

And as an aside the post you link to as a clarification is precisely the same post I quoted. The thing I quoted was in fact the qualification you were offering Paul MauDib.

SubG fucked around with this message at 23:39 on Aug 27, 2014

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

SubG posted:

Once again there's some confusion here.

You don't say. In addition to an oddly narrow definition of metadata, you seem inclined to treat "NSA surveillance efforts" as some monolithic thing, instead of a complicated patchwork of techniques -- physical, digital, and legislative -- with differing costs, levels of effectiveness, and mitigation techniques. I think you're oversimplifying, but we're clearly talking past each other, so I'll leave it at that.

SubG
Aug 19, 2004

It's a hard world for little things.

Kobayashi posted:

You don't say. In addition to an oddly narrow definition of metadata, you seem inclined to treat "NSA surveillance efforts" as some monolithic thing, instead of a complicated patchwork of techniques -- physical, digital, and legislative -- with differing costs, levels of effectiveness, and mitigation techniques. I think you're oversimplifying, but we're clearly talking past each other, so I'll leave it at that.
The `oddly narrow' definition I'm using is the one which is used in statute, by the courts, and in the leaked internal documents from the NSA. It's the one that's germane for discussion of NSA surveillance, and more broadly surveillance under US law.

And I'm not treating NSA surveillance as a monolithic entity. Not by implication or by explicit declaration; as far as I know I'm the one that's gone into the greatest detail about the different methods and capabilities employed by the NSA, at least in the past several pages.

I have framed my discussion about technical approaches to thwarting NSA surveillance in terms of passive collection of `metadata' largely because it is this context you have insisted upon. This actually makes sense (for your rhetorical position), because passive collection of `metadata' is the poo poo the NSA gets more or less for free. It's the shallow end of the surveillance pool, the kiddie poo poo. Put in different terms, it's the easiest and most straightforward threat model if we're talking about devising a communication scheme resistant to NSA surveillance. It is a lower bound on the difficulty. If you want to broaden the discussion to active methods the situation is, as I've already explained at some length, much, much worse.

V. Illych L.
Apr 11, 2008

ASK ME ABOUT LUMBER

I don't actually know enough about this stuff to weigh in, but you're coming across as a real pedantic prick, SubG. Just saying.

Nintendo Kid
Aug 4, 2011

by Smythe

Winkle-Daddy posted:

Can you name a single CA that doesn't validate registration details? What I'm saying is "good luck getting a cert for a domain you don't own." Whether or not NSA and the GCHQ can through back channel means is a totally different question. Also, if a CA is found to not be doing proper validation they will be dropped as trusted by browsers thus ending that company.

Here's an article about a Dutch certificate authority that issued SSL certs for google domains in 2011: http://www.computerworld.com/article/2510797/security0/hackers-stole-google-ssl-certificate--dutch-firm-admits.html

In that case, it was done by the people first hacking into the CA's network and taking control of computers there in order to issue and sign the certs. Other cases happen where the guy in charge of monitoring requests at a third-string CA didn't pay close enough attention, and some cases where it seems that the CA involved may have been directly paid off by malicious users.


mystes posted:

However, they absolutely can't do this to the entire internet without people noticing.

This is immaterial because they don't do anything to the entire internet now or in the past (with the possible exception of the very early days when there was very little traffic). In fact, since statistics works, they should only need to do it to a representative sample of not-otherwise-targeted people for short bursts of time to be able to pick up on any sorts of trends they want to investigate further.

mystes posted:

If intercepting unencrypted internet traffic is so unnecessary why is the NSA doing it in the first place?

Because when you're intercepting encrypted traffic and traffic of specific targets, you will always have at least some unencrypted stuff in your initial nets. So why not keep it?

SubG
Aug 19, 2004

It's a hard world for little things.

V. Illych L. posted:

I don't actually know enough about this stuff to weigh in, but you're coming across as a real pedantic prick, SubG. Just saying.
Sorry that you feel that way, but the subjects under discussion in my last couple posts are, broadly, cryptography and surveillance law. Neither of these are subjects which permit much imprecision. Or at least that's my opinion as a real pedantic prick who does know enough about this stuff to weigh in.

Winkle-Daddy
Mar 10, 2007

Nintendo Kid posted:

Here's an article about a Dutch certificate authority that issued SSL certs for google domains in 2011: http://www.computerworld.com/article/2510797/security0/hackers-stole-google-ssl-certificate--dutch-firm-admits.html

In that case, it was done by the people first hacking into the CA's network and taking control of computers there in order to issue and sign the certs. Other cases happen where the guy in charge of monitoring requests at a third-string CA didn't pay close enough attention, and some cases where it seems that the CA involved may have been directly paid off by malicious users.

I meant a CA doing it willingly. Breaking in and issuing certs isn't that surprising. But the idea that any Joe blow can find a shady CA willing to process a CSR for google.com is pretty laughable.

SubG
Aug 19, 2004

It's a hard world for little things.

Winkle-Daddy posted:

I meant a CA doing it willingly. Breaking in and issuing certs isn't that surprising. But the idea that any Joe blow can find a shady CA willing to process a CSR for google.com is pretty laughable.
Many (most?) CAs will process a CSR for literally any domain if it passes whatever validation they do. For many CAs this is as simple as an automated email to either one of the contacts listed in the domain's zone file or, sometimes, to an address selected by the requester from a number of common addresses at the domain root (e.g. webmaster@whatever.com).

How reliable this validation is is a subject worthy of its own discussion. Circa late 2009 there was a well publicised example of someone obtaining a cert that IE, Chrome, and Safari accepted as valid for paypal.com. This was due to an input validation error on the CA side (permitting a CN with a NULL in the middle) which has presumably been fixed. I think the example is nevertheless instructive.
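For anyone curious what that NULL-in-the-CN bug looked like mechanically, here is a toy reconstruction (the function names are hypothetical, not the actual CA or browser code): the CA validated the full CN string, while a C-style string comparison on the client side stopped at the first NUL byte, so the two sides saw different domains.

```python
# The CN as submitted in the CSR, with an embedded NUL byte.
cn = "www.paypal.com\x00.attacker.example"

def ca_full_string_check(cn: str, requester_domain: str) -> bool:
    """What the CA validates: the whole string, which ends in the
    requester's own (legitimately controlled) domain."""
    return cn == requester_domain or cn.endswith("." + requester_domain)

def client_c_string_check(cn: str, expected_host: str) -> bool:
    """What a buggy client effectively did: compare as a C string,
    i.e. stop reading at the first NUL byte."""
    return cn.split("\x00", 1)[0] == expected_host

# The CA sees a name under attacker.example and happily signs it...
assert ca_full_string_check(cn, "attacker.example")
# ...while the browser sees a valid cert for www.paypal.com.
assert client_c_string_check(cn, "www.paypal.com")
```

The lesson generalises: the cert is only as trustworthy as the weakest string handling anywhere in the validation chain.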

Winkle-Daddy
Mar 10, 2007

SubG posted:

Many (most?) CAs will process a CSR for literally any domain if it passes whatever validation they do. For many CAs this is as simple as an automated email to either one of the contacts listed in the domain's zone file or, sometimes, to an address selected by the requester from a number of common addresses at the domain root (e.g. webmaster@whatever.com).
When I did this for a bunch of domains, every single one of them was done via a goddamn phone call to the owner listed in the whois information. That was 4 or 5 different CAs. I was not the owner, I had to call the CA, give them the domain name, confirm the phone number in whois, tell the owner to wait for a phone call at that number...you get the idea. It would have owned if it was just an e-mail. As far as I know, the only thing we really used e-mail for was for registrar transfers of domains, but not domain verification.

quote:

How reliable this validation is is a subject worthy of its own discussion. Circa late 2009 there was a well publicised example of someone obtaining a cert that IE, Chrome, and Safari accepted as valid for paypal.com. This was due to an input validation error on the CA side (permitting a CN with a NULL in the middle) which has presumably been fixed. I think the example is nevertheless instructive.
lol, I'd be very curious to know what the process looked like that went into approving this. I'm guessing it was Comodo since they are terrible.

SubG
Aug 19, 2004

It's a hard world for little things.

Winkle-Daddy posted:

When I did this for a bunch of domains, every single one of them was done via a goddamn phone call to the owner listed in the whois information. That was 4 or 5 different CAs. I was not the owner, I had to call the CA, give them the domain name, confirm the phone number in whois, tell the owner to wait for a phone call at that number...you get the idea. It would have owned if it was just an e-mail. As far as I know, the only thing we really used e-mail for was for registrar transfers of domains, but not domain verification.
Even with phone verification the domain validation is only going to be as good as the CA's resistance to DNS fuckery on that particular day.

Winkle-Daddy posted:

lol, I'd be very curious to know what the process looked like that went into approving this. I'm guessing it was Comodo since they are terrible.
The issuing CA for the proof of concept cert that was released was ipsca, who is now defunct. The guy who disclosed the exploit at blackhat was able to obtain certs via the same method from other CAs. Or at least reported being able to.

I assume that the exploit has been fixed by all of the CAs anyone cares about (although there's over a hundred of 'em, so it wouldn't surprise me if at least one of them hosed it up). So I'm not arguing that this particular thing would work. It just illustrates the fact that the reliability of the cert is only as good as the CA's validation of the CSR.

i am harry
Oct 14, 2003

Paul MaudDib posted:

I mean, we can measure the distance to the moon to millimeter accuracy, if the drone thing is true then they can factor out engine vibration and movement caused by ranging from an aircraft, the idea of doing laser microphony from a satellite isn't inherently insane given where modern technology's at these days. Obviously there'd be problems from atmospheric distortion and stuff too but we're getting pretty good at dealing with that kind of stuff to deal with other laser problems like thermal blooming.

The other way to interpret "comes with ears" is that drones mount some SIGINT gathering capability like monitoring frequency bands used for cell phones and stuff. Hell, you could probably even set up a stingray type setup that hacks common baseband chips and turns on the chip's GPS functionality or microphones.

http://www.theguardian.com/world/video/2014/aug/14/surveillance-satellite-launched-from-california-video

"A new surveillance satellite is launched into orbit from Vandenberg Air Force Base in California. The satellite, which is owned by DigitalGlobe, will be able to capture images at 'the highest resolution currently available from space', using an infrared sensor that will allow it to see through fog and smoke. DigitalGlobe says the device will be used for crop mapping, as well as being able to identify the species and relative health of plants."

KuNova
Oct 12, 2005
I REPORT MODERATORS BECAUSE I'M FUCKING RETARDED

SubG posted:

Sorry that you feel that way, but the subjects under discussion in my last couple posts are, broadly, cryptography and surveillance law. Neither of these are subjects which permit much imprecision. Or at least that's my opinion as a real pedantic prick who does know enough about this stuff to weigh in.

Do you have any recommendations for formal reading?

SubG
Aug 19, 2004

It's a hard world for little things.

KuNova posted:

Do you have any recommendations for formal reading?
In basic cryptography? Goldreich's Foundations of Cryptography books as general introduction, and Stinson's Cryptography: Theory and Practice as a general textbook. Menezes' Handbook of Applied Cryptography as a reference work. All of these presume that you know basic number theory.

In terms of broader protocol design-level stuff, Ferguson and Schneier's Practical Cryptography/Cryptography Engineering is pretty good. It isn't a formal text like the others I just mentioned, and is aimed more at a motivated lay reader. But it covers more of the conceptual stuff that's relevant to the discussion in this thread than the crunchy, nuts-and-bolts crypto texts.

All of these are texts that cover the fundamentals and are still valuable today if you're looking to learn the subject. But crypto is one of those subjects where some portion of the material goes out of date by the time you can print a textbook covering it. So if you want to keep on top of the latest poo poo, particularly in cryptanalysis, QC, and that sort of thing, you're going to end up reading papers published online and stuff like that.

There are a couple of places that have offered online classes in introductory cryptography. I've never looked at any of them so I can't speak to their quality, but it's something that might be worth looking into if you're interested in self-learning the subject.

Arkane
Dec 19, 2006

by R. Guyovich
Based on watching the arguments for an hour, pretty sure the ACLU suit is going to be successful in the second circuit court in ACLU vs Clapper.

What's the time-line on the Supreme Court here, assuming they decide to hear the case? Are we more likely to hear a decision in 2015 or 2016?

It's live-streaming here: http://www.c-span.org/video/?321163-1/aclu-v-clapper-oral-arguments-phone-record-surveillance

computer parts
Nov 18, 2010

PLEASE CLAP
So apparently the whole "nude photo" controversy with Apple and iCloud was directly caused by hackers using tools designed for the NSA/police.

i am harry
Oct 14, 2003

computer parts posted:

So apparently the whole "nude photo" controversy with Apple and iCloud was directly caused by hackers using tools designed for the NSA/police.

I'm so glad the FBI is investigating this incident. :rolleyes:

SubG
Aug 19, 2004

It's a hard world for little things.

computer parts posted:

So apparently the whole "nude photo" controversy with Apple and iCloud was directly caused by hackers using tools designed for the NSA/police.
Perhaps I'm just being cynical, but it's a hacking tool developed in Russia. I think it was designed for hacking, and it has a nice UI so that it can be sold into the now-thriving LEA market.

But leaving that aside, I think the real story is in Apple's statement: `none of the cases we have investigated has resulted from any breach in any of Apple’s systems including iCloud® or Find my iPhone'. This is probably literally true, and I think it underlines the real problem.

When some guy steals your credit card and you suddenly find a bunch of fraudulent charges on your bill you can equivalently say that the action did not result from a breach on any of the credit card company's systems. But stating the problem in these terms obscures the fact that the problem is the system. The way it is designed. If the system can fail without a breach of any of the individual components, then the system is broken.

That this is not generally appreciated and is not the normal manner of framing the issue illustrates a shifting of the presumption of responsibility from the service provider to the individual consumer. In ye olden days if someone passed a bad check against your account that was bank fraud---that is, the offence was understood to be one perpetrated by the guy forging the check against the financial institution who accepted it. Now, if someone fraudulently uses a credit card belonging to someone else, it is `identity theft'---that is, the offence is understood to be one perpetrated by the guy using the credit card against the individual who owns it. I think something similar has happened throughout the privacy and broader information security arena, and that Apple statement encapsulates it perfectly.

Apple wishes us to believe that this happening is not evidence of the system being fundamentally broken. This involves us accepting that random 4channers (or whatever) obtaining your personal information isn't a failure of the system. It's the failure of the victim. Because if we don't accept that, then the fault must be with the ones that designed and operated a fundamentally hosed up system (and, of course, the ones exploiting it).

That is, the official corporate position is that the responsibility for maintaining the security of the customer's data does not lie with the multi-billion dollar technology corporation offering the service which holds and handles the data, but rather with the completely nontechnical enduser.

And, I'll hasten to add, that there's a certain amount of grim inevitability about this position---the enduser has to have access to the data (it's their data after all) and there's no way a company can `outsmart' the enduser to prevent them from making any mistake which might lead to unintentional disclosure.

But I think the situation highlights the fundamental problem(s) with privacy in the modern world I've been talking about. Encryption won't solve this problem. It didn't.

Salt Fish
Sep 11, 2003

Cybernetic Crumb

SubG posted:


...

Apple wishes us to believe that this happening is not evidence of the system being fundamentally broken. This involves us accepting that random 4channers (or whatever) obtaining your personal information isn't a failure of the system. It's the failure of the victim. Because if we don't accept that, then the fault must be with the ones that designed and operated a fundamentally hosed up system (and, of course, the ones exploiting it).

...

But I think the situation highlights the fundamental problem(s) with privacy in the modern world I've been talking about. Encryption won't solve this problem. It didn't.

I have two small points to make here. First, as far as I can tell each of these cases was a password being cracked, guessed, or otherwise known. If that is the case then it is the victim's fault. If you upload nudes of yourself and attempt to seal them away with the password pudding1 then you are responsible, not Apple. Additionally, people should know better than to send nude photographs of themselves anywhere in the year 2014. The easiest way to keep those photos from leaking is to simply not take them. If you have to take them, don't upload them. If you have to upload them, remove them once your significant other has finished jerking it to them.

Second, saying encryption didn't solve the problem of a password being cracked is a bad way of phrasing things. This isn't a problem that encryption is designed to solve. It's like going out of your way to point out that your car door locks didn't stop someone from stealing your rims.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

Salt Fish posted:

I have two small points to make here. First, as far as I can tell each of these cases was a password being cracked, guessed, or otherwise known. If that is the case then it is the victim's fault. If you upload nudes of yourself and attempt to seal them away with the password pudding1 then you are responsible, not Apple. Additionally, people should know better than to send nude photographs of themselves anywhere in the year 2014. The easiest way to keep those photos from leaking is to simply not take them. If you have to take them, don't upload them. If you have to upload them, remove them once your significant other has finished jerking it to them.

I agree with your point regarding encryption, but I agree with SubG that this is a failure of the system, not the users. Blaming the victims in this case is like blaming Home Depot customers for using credit/debit cards to buy lumber. While it's not entirely clear what happened yet, Apple clearly bears a lot of responsibility here. First, one or more of their services did not rate limit login attempts. This may not have been actively exploited in this particular case, but it is an inexcusable fuckup on their part. Second, Apple's use of security questions for account verification is outdated and known to be incompatible with human behavior. Worse, it looks like their stepwise verification flow leaked information to attackers. Finally, Apple recommended 2FA as a way for people to protect themselves, but their implementation doesn't even cover iCloud backups. I loves me some Apple, but they're not perfect, and this is an enormous black eye ahead of their rumored health and payments initiatives.

E: Not that it's any of our business at all, but Mary Winstead claims she did just what you recommend -- delete the photos.

Kobayashi fucked around with this message at 23:33 on Sep 3, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Salt Fish posted:

I have two small points to make here. First, as far as I can tell each of these cases was a password being cracked, guessed, or otherwise known.
If a password cracking tool posted to github can brute force a password, then the service provider could identify the password as weak to brute forcing, either at creation time or as part of routine checking against the stored hashes. This practice---e.g. running a password cracker against your own user's password hashes to identify weak/guessable passwords---has been considered part of general best practices for literally decades.

This is not done as a routine matter of course by service providers for a few reasons: asking users with bad passwords to change their password irritates the users; it's a resource drain on your support staff, who end up having to deal with those irate users; and, importantly, if the provider took this sort of action it could be construed as an acceptance of responsibility for this kind of data security on the part of the provider.

Password security is something that necessarily involves the end user, yes. But the end user doesn't determine the password policy, and a password policy which would prevent the sort of thing that apparently has been happening with Apple is absolutely something which qualifies as minimum best practice.
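What "running a cracker against your own users' hashes" means in practice can be sketched in a few lines. This assumes unsalted SHA-256 purely to keep the illustration short; a real deployment would be auditing salted, slow hashes (bcrypt/scrypt/argon2) and using a real cracking wordlist:

```python
import hashlib

def audit_passwords(stored_hashes: dict, wordlist: list) -> list:
    """Return usernames whose stored hash matches any wordlist entry.

    Toy scheme: unsalted SHA-256. Real systems use salted slow KDFs,
    so the audit would hash each candidate per-user with that user's salt.
    """
    rainbow = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}
    return [user for user, h in stored_hashes.items() if h in rainbow]

users = {
    "alice": hashlib.sha256(b"pudding1").hexdigest(),
    "bob": hashlib.sha256(b"x9$Lq!7vR#2mWz").hexdigest(),
}
# "pudding1" is in every cracking wordlist, so alice gets flagged;
# bob's password survives this particular list.
weak = audit_passwords(users, ["123456", "password", "pudding1"])
```

Any account this flags is, by construction, crackable by the same tool the attackers are using, so the provider can force a reset before the attackers get there.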

I want to hasten to add that this is actually a technical side-issue. The central issue is that the data, infrastructure, and technical expertise are in one place and the nominal responsibility is in another. Whatever the reasons for this, and whether or not practical reality requires that this is the case, it is a problem.

Salt Fish posted:

Second, saying encryption didn't solve the problem of a password being cracked is a bad way of phrasing things. This isn't a problem that encryption is designed to solve.
That is in fact my point.

Crack
Apr 10, 2009
I think the best solution for this would be apple not saving old backups of users' devices on their servers, patching a 2+ year vulnerability that allows the backups to be downloaded without 2fa and clearly nagging users to use 2fa or get a better password. Past that, if your famous celebrity user decides to go with their cat's name as a password and "what was your first school" as a security question then uploads nudes quite frankly they are going to get a deserved lesson. If you're in a position where you know someone might want to access your data you have a responsibility to protect it if you don't want them to have it.

I do think there should be a fairly strict password policy as well but no one wants to carry around a note with their password on with their iPhone or whatever.

WhiskeyJuvenile
Feb 15, 2002

by Nyc_Tattoo
Choose five words and allow voice recognition to input your password
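Five random words is actually a defensible scheme, provided the words come from a real RNG rather than the user's head. A quick sketch (toy wordlist here; a real diceware list has 7,776 entries):

```python
import secrets

# Toy stand-in; a real diceware wordlist has 7,776 words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lumber",
         "velvet", "crater", "mosaic", "pigeon", "tundra", "walnut"]

def passphrase(n_words: int = 5) -> str:
    """Build a passphrase from independently, uniformly chosen words."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

# With the full 7,776-word list, five words give log2(7776**5) ~ 64.6
# bits of entropy -- far beyond any cat's-name password.
```

The entropy math only holds if the selection is random; five words the user picks themselves are nowhere near uniform.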

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

Crack posted:

Past that, if your famous celebrity user decides to go with their cat's name as a password and "what was your first school" as a security question then uploads nudes quite frankly they are going to get a deserved lesson. If you're in a position where you know someone might want to access your data you have a responsibility to protect it if you don't want them to have it.

Again, blaming the victim isn't the answer here. There is nothing unique to celebrities about this attack. It could just as easily be CEOs, politicians, exes, or bullied children who were targeted. That's because security questions suck. They're either trivially answered by a public records search or some minimal social engineering (e.g. basic Facebook stalking), or they are so vague as to be immediately forgotten by the user after sign up.

E: Also, the way Apple asks for the user's iCloud password whenever anyone uses the App Store encourages lovely, mobile-friendly passwords.

Kobayashi fucked around with this message at 01:38 on Sep 4, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Crack posted:

If you're in a position where you know someone might want to access your data you have a responsibility to protect it if you don't want them to have it.
Literally everyone is in that position.

Salt Fish
Sep 11, 2003

Cybernetic Crumb

Kobayashi posted:

Again, blaming the victim isn't the answer here. There is nothing unique to celebrities about this attack. It could just as easily be CEOs, politicians, exes, or bullied children who were targeted. That's because security questions suck. They're either trivially answered by a public records search or some minimal social engineering (e.g. basic Facebook stalking), or they are so vague as to be immediately forgotten by the user after sign up.

You can't deny that there is at least an aspect of user culpability. By putting nudes of yourself onto the god drat internet you're taking a calculated risk for very little gain. Don't blindly trust some 3rd party cloud vendor to protect your data regardless of their brand name. Yes, the people who stole the data suck, yes, apple sucks, but you have to believe that everything on the internet sucks to be safe from data theft.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo

Salt Fish posted:

You can't deny that there is at least an aspect of user culpability. By putting nudes of yourself onto the god drat internet you're taking a calculated risk for very little gain. Don't blindly trust some 3rd party cloud vendor to protect your data regardless of their brand name. Yes, the people who stole the data suck, yes, apple sucks, but you have to believe that everything on the internet sucks to be safe from data theft.

I would maaaaaybe concede a little culpability for extremely easily brute-forced passwords, but even then I'm not sure (see my comment about mobile-friendly passwords above). Instead, I'd challenge the notion that any of the victims knowingly put their pictures "on the Internet," that they have any idea what "the cloud" is, or that they know the difference between camera roll backup (on by default) and Photo Stream (off by default). No, if I had to guess, I imagine these people thought they were sharing intimate pictures with people close to them and only people close to them. It's not like Apple goes around hanging caveat emptor all over the place -- the iCloud marketing site speaks in terms of "sharing what you want" and "peace of mind." I don't think you can blame people for trusting Apple to keep their cloud data safe, insofar as they even make the connection between photos/iMessage and the cloud to begin with.

SubG
Aug 19, 2004

It's a hard world for little things.

Salt Fish posted:

You can't deny that there is at least an aspect of user culpability. By putting nudes of yourself onto the god drat internet you're taking a calculated risk for very little gain. Don't blindly trust some 3rd party cloud vendor to protect your data regardless of their brand name. Yes, the people who stole the data suck, yes, apple sucks, but you have to believe that everything on the internet sucks to be safe from data theft.
If we broaden this from nude photos to data in general, what does this general sentiment tell us about NSA surveillance?

Crack
Apr 10, 2009

SubG posted:

Literally everyone is in that position.

I don't put my dick pics on iCloud though, and if I did and cared enough about people who could possibly guess my Apple ID seeing my dick, I'd make sure to protect those pics. I can think of some obscure personal security questions like "name of your first pet" which literally only my parents and sister know (unless the friends I had when I was 4 remember and want to see my dick), if I didn't feel like encrypting or using 2FA on top. And if I didn't want the NSA to see, I'd encrypt them.

Maybe apple et al should educate their users: "if you put poo poo on the internet there is a non-zero chance people will see it". But ultimately if I have a picture of my penis I really don't want anyone to see, I am going to find out the best way to secure it rather than blindly trusting some random corporation while leaving all my security details on my Facebook info, but maybe I'm tin-hat paranoid.
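(A minimal sketch of the "encrypt it yourself before uploading" idea, using only Python's standard library: derive the encryption key locally so the provider only ever stores ciphertext. The iteration count and salt size here are illustrative assumptions, and a vetted tool like gpg, age, or libsodium is preferable to anything hand-rolled.)

```python
import hashlib
import os

# Derive a file-encryption key on YOUR machine from a passphrase, so the
# cloud provider never sees the key, only ciphertext. Parameters are
# illustrative; a real tool should be used instead of rolling your own.

passphrase = b"correct horse battery staple"
salt = os.urandom(16)  # random salt, stored alongside the ciphertext

key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations=600_000)

print(len(key))  # 32-byte key, the size a real library would want for AES-256
```

The iteration count is the point: key stretching makes every brute-force guess expensive, which is the one variable the user actually controls even when the service's own defenses fail.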

Honestly the thing I find most disturbing about this is apple uploading backups of users' entire devices and apparently keeping them, but it's apple so I'm not really surprised. And this is an apple problem primarily, and a user problem if they used terrible security, not a problem with "the system".

Also it's sad to admit but even if I did have dick pics I don't think anyone would bother to go through the effort of getting them.

Crack fucked around with this message at 02:20 on Sep 4, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Crack posted:

I don't put my dick pics on iCloud though and if I did and cared enough about people who could possibly guess my apple id seeing my dick I'd make sure to protect those pics.
That's a separate proposition. You said: `If you're in a position where you know someone might want to access your data you have a responsibility to protect it if you don't want them to have it.' Unless the only reason you can imagine anyone would want someone else's data is if there are dicks or cooters in it.

Put in slightly different terms: No, you were right the first time. And literally everyone is in that position, because literally everyone's data is valuable to someone else. That value proposition is the internet's basic business model.

Crack
Apr 10, 2009

SubG posted:

That's a separate proposition. You said: `If you're in a position where you know someone might want to access your data you have a responsibility to protect it if you don't want them to have it.' Unless the only reason you can imagine anyone would want someone else's data is if there are dicks or cooters in it.

Put in slightly different terms: No, you were right the first time. And literally everyone is in that position, because literally everyone's data is valuable to someone else. That value proposition is the internet's basic business model.

My point is that a celebrity's data is more valuable than my data. Jennifer Lawrence's nudes make the news; I could post mine straight to reddit and nobody would care. So it's in her best interest to take more care than me to protect them if she doesn't want them out there.

And yes, stuff I believe to be valuable to others I take care in locking down, like my financial info, and wouldn't store money in some dodgy unregulated bank.

I think if I want to store something online with a terrible password that's fine, but I'm not going to blame apple when it gets lost. So it is MY fault, not apple's.

Crack fucked around with this message at 02:43 on Sep 4, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Crack posted:

My point is a celebrity's data is more valuable than my data.
This is the wrong value proposition. The question is whether the data, any data, is worth more to the person who wants it than that person is willing to spend to obtain it.

The guy who was wrongly accused by Anonymous on Twitter of shooting Michael Brown presumably wasn't aware---could not have been aware---of what his data were worth to outsiders until he was doxxed, for example.

Crack posted:

I think if I want to store something online with a terrible password that's fine, but I'm not going to blame apple when it gets lost. So it is MY fault not apples.
If the people on the Titanic didn't want to drown, they should have brought their own lifeboats.

Salt Fish
Sep 11, 2003

Cybernetic Crumb

SubG posted:

That's a separate proposition. You said: `If you're in a position where you know someone might want to access your data you have a responsibility to protect it if you don't want them to have it.' Unless the only reason you can imagine anyone would want someone else's data is if there are dicks or cooters in it.

Put in slightly different terms: No, you were right the first time. And literally everyone is in that position, because literally everyone's data is valuable to someone else. That value proposition is the internet's basic business model.

In this story the celebrities didn't do an effective risk/cost analysis and got burned. I think the public's perception of these technologies is lagging behind the reality of their poor security. This is the reason I want to criticize the celebrities for poor security practices; it's not so much to shame them or anything, it's to drive public perception away from believing these services are secure. People will eventually understand that everything we post, click, view, or upload can eventually be exposed through enough effort by a malicious 3rd party. I want to drive that learning process forward and these pictures being leaked is a great chance to do that. Don't want naked pictures out there? Don't post them. It sucks that the internet can't have security guarantees but that's the world we live in.

edit: Also, the idea that we somehow drive security improvements through criticizing Apple is clearly misguided. Even if Apple is shamed into having outstanding security there are still 10,000 other services out there with laughable security. Fact is, the only person you can trust on the Internet is yourself. If you want to be safe your time is far better spent improving your own habits.

Salt Fish fucked around with this message at 02:58 on Sep 4, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Salt Fish posted:

In this story the celebrities didn't do an effective risk/cost analysis and got burned. I think the public's perception of these technologies is lagging behind the reality of their poor security. This is the reason I want to criticize the celebrities for poor security practices; it's not so much to shame them or anything, it's to drive public perception away from believing these services are secure. People will eventually understand that everything we post, click, view, or upload can eventually be exposed through enough effort by a malicious 3rd party. I want to drive that learning process forward and these pictures being leaked is a great chance to do that. Don't want naked pictures out there? Don't post them. It sucks that the internet can't have security guarantees but that's the world we live in.
Do you believe that's the takeaway message from the Snowden disclosures as well?

If not before the fact, is that how the Snowden disclosures should inform decisions made today?

Salt Fish
Sep 11, 2003

Cybernetic Crumb
I don't know what you're trying to ask me but it seems both poorly phrased and highly rhetorical.

Crack
Apr 10, 2009

SubG posted:

If the people on the Titanic didn't want to drown, they should have brought their own lifeboats.

If you didn't want lung cancer you should have heeded the "Smoking Kills" warning on the packet.

What I'm really taking issue with is this:

SubG posted:

The central issue is that the data, infrastructure, and technical expertise are in one place and the nominal responsibility is in another.

I don't think redesigning the system somehow so all the responsibility lies with the service provider is the solution. I think educating the public to the risks of putting anything online, and how to best protect yourself is.

I take some care in protecting things I don't believe have value, and maybe everyone doing that is a better solution than trying to build a completely secure system.

SubG
Aug 19, 2004

It's a hard world for little things.

Salt Fish posted:

I don't know what you're trying to ask me but it seems both poorly phrased and highly rhetorical.
You said: `People will eventually understand that everything we post, click, view, or upload can eventually be exposed through enough effort by a malicious 3rd party. [...] Don't want naked pictures out there? Don't post them.'

Assuming that doesn't only apply to nude photos of celebrities, then how does this inform our understanding of the NSA surveillance activities described in the Snowden disclosures?

That is, is this a story about how Americans `didn't do an effective risk/cost analysis and got burned'? Is the takeaway that if you don't want someone to see your data---phone calls, photos, emails, texting, browsing habits, whatever---then you shouldn't make those calls, send those emails, or whatever in the first place?

Crack posted:

I don't think redesigning the system somehow so all the responsibility lies with the service provider is the solution. I think educating the public to the risks of putting anything online, and how to best protect yourself is.

I take some care in protecting things I don't believe have value, and maybe everyone doing that is a better solution than trying to build a completely secure system.
I don't think redesigning the system so that all the responsibility lies with the service provider is possible. I made that point in my first post on the subject. And `completely secure systems' are a red herring that I certainly haven't advocated and I don't believe anyone else has. And even if they were theoretically possible, it's still a false dichotomy.

Anyway, I'd direct the questions I've asked of Salt Fish to you as well. If the lesson is that we're all ultimately responsible for our own privacy and data security, how do we apply this to a problem like NSA surveillance?


Salt Fish
Sep 11, 2003

Cybernetic Crumb
I still don't get it. The NSA programs are wasteful programs that are using tax money to make the internet less secure. I don't see how being a savvy internet user has anything to do with their terrible programs? I feel like you're somehow trying to conflate 'How To be An Internet User' with some kind of philosophical critique of the NSA.
