wwb
Aug 17, 2004

aBagorn posted:

I was just about to ask something like this, but what I want to do is scrape our AD for the information without making the user log in (what it should do is keep out all users except those in our HelpDesk and Desktop Support OUs, as well as domain admins)

I tried just doing this in the web.config, doing an

allow roles "domain\ou"
deny users "*"

but that poo poo out errors about my SQL connection string.

Disable membership and the role provider. Change authentication mode to windows. Profit.
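Roughly, the web.config shape wwb is describing looks like this — the domain and group names below are placeholders for your actual AD groups. With Windows authentication, URL authorization checks roles against the AD groups on the user's Windows token, so nothing ever touches a SQL role provider:

```xml
<system.web>
  <!-- Windows auth: IIS identifies the user; no login form, no SQL membership -->
  <authentication mode="Windows" />
  <!-- Disable the (SQL-backed) role provider; role checks use the Windows token -->
  <roleManager enabled="false" />
  <authorization>
    <allow roles="MYDOMAIN\HelpDesk, MYDOMAIN\Desktop Support, MYDOMAIN\Domain Admins" />
    <deny users="*" />
  </authorization>
</system.web>
```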


Dromio
Oct 16, 2002
Sleeper

gariig posted:

Can you add
code:
<your DB context>.Log = Console.Out
to see if you have an N+1 situation.

Aha, great tip. And you're right, I did manage to end up with N+1. I'm not sure how to fix it yet, but knowing it's the problem is a great start. Thank you.

adaz
Mar 7, 2009

wwb posted:

Probably the more terrible thing is that it seems you are returning IQueryables to the view which can get really nasty really fast. Always materialize things by calling .ToArray() or .ToList(). Or map out to another object.

Is the repo newing up the EF DbContext? If so, it should be responsible for cleaning it up, probably by implementing IDisposable. I'd generally prefer passing the DbContext into the repository as a dependency, but then the infrastructure would need to deal with disposing the context.

I wasn't passing IQueryable things up, luckily enough, but yeah, the repo was newing up the EF context. If I pass the DbContext into the repository as a dependency, I can either implement IDisposable on the repository or do a bunch of using blocks in each of the methods that query EF and/or dispose of it explicitly in each method. IDisposable on the repository as a whole seems like the better option.

aBagorn
Aug 26, 2004

wwb posted:

Disable membership and the role provider. Change authentication mode to windows. Profit.

Profit indeed!

E: was it really that simple? LOL

aBagorn fucked around with this message at 20:53 on Feb 1, 2012

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

Dromio posted:

Aha, great tip. And you're right, I did manage to end up with N+1. I'm not sure how to fix it yet, but knowing it's the problem is a great start. Thank you.

If it's a parent-child relationship and you only need to go one table deep, you can do
code:
DataLoadOptions options = new DataLoadOptions();
options.LoadWith<parenttable>(c => c.childtable);
dbContext.LoadOptions = options;
That should make the LINQ-to-SQL (L2S) SQL generator perform an INNER JOIN. If you are going between three tables you are screwed using L2S. It's time for EF or writing a view in your database.

Dromio
Oct 16, 2002
Sleeper

gariig posted:

If you are going between three tables you are screwed using L2S. It's time for EF or writing a view in your database.
Thanks. I was able to prevent lazy-loading across multiple tables by crafting my initial query to return structs that contained properties from each of the tables:

code:
var results = from items in repository.Context.Table
              select new CompositeItem
              {
                  Table1Prop = items.Prop,
                  Table2Prop = items.Table2.Prop,
                  Table3Prop = items.Table2.Table3.Prop
              };
This resulted in a nice SQL query that pulled back all the columns I needed from all the tables. My process that took 20 minutes against production data now runs in 3 seconds.

Thanks for all the help!

Dromio fucked around with this message at 22:12 on Feb 1, 2012

PhonyMcRingRing
Jun 6, 2002

Ithaqua posted:

Also, when working with user passwords, for the love of god use a SecureString. You don't want your user's password to get interned.

Is SecureString really gonna prevent that from happening when a user is posting data to an asp.net site? The password's gonna be in plain text all over the place in the request info.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

PhonyMcRingRing posted:

Is SecureString really gonna prevent that from happening when a user is posting data to an asp.net site? The password's gonna be in plain text all over the place in the request info.

gotta use ajax to send the password one character at a time

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

PhonyMcRingRing posted:

Is SecureString really gonna prevent that from happening when a user is posting data to an asp.net site? The password's gonna be in plain text all over the place in the request info.

The password is going to be in plaintext at some point, sure. Your job as a conscientious developer is to minimize that amount of time.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Your job as a conscientious developer is to not do pointless things that may mislead future maintainers into thinking that a security hole has been dealt with. SecureString doesn't have a constructor that takes a string specifically to discourage people from thinking that's a good idea.
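For what it's worth, the intended usage is building the SecureString one character at a time from a non-string source, which is exactly why the constructor mentioned above doesn't exist. A minimal sketch (the char array here is a stand-in for per-keystroke input such as Console.ReadKey):

```csharp
using System;
using System.Security;

// SecureString is built one character at a time, e.g. from console
// keystrokes, so the plaintext never has to exist as a managed string.
var password = new SecureString();
foreach (char c in new[] { 's', 'w', 'o', 'r', 'd' }) // stand-in for ReadKey input
    password.AppendChar(c);
password.MakeReadOnly();
Console.WriteLine(password.Length); // 5
```

The moment you round-trip it through a regular string (as a posted form field inevitably does), the interning concern comes right back, which is the point being argued here.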

Dietrich
Sep 11, 2001

You could always use OpenID or something like it to do authentication without even having a password.

Sab669
Sep 24, 2009

In regards to my post on the previous page, I guess I didn't realize there was a difference between "encrypting" and "hashing" - I thought they basically meant the same thing.

But yeah, I know I never want to store the users' passwords. Have them create an account, hash and salt the password (account creation date/time is a common salt from what I hear), and then re-do the process when they log in... but I don't know how to actually do that in the language, which is what my issue was. I wasn't sure which hash is more secure than another or how to actually handle any of it.

Thanks for the crash course, though.

e;
Found this on StackOverflow
code:
using System.Text;
using System.Security.Cryptography;

public static string EncodePasswordToBase64(string password)
{
    byte[] bytes   = Encoding.Unicode.GetBytes(password);
    byte[] inArray = HashAlgorithm.Create("SHA1").ComputeHash(bytes);
    return Convert.ToBase64String(inArray);
}
I guess I'm just so unsure of all this stuff because, again, when I was trying to do this in PHP I had the issue of not properly re-creating the salt when the user logged in. So if "cat" was their password and the salt was 123 when they created the account, maybe cat234 was the value being generated when they tried to log in.

Sab669 fucked around with this message at 16:43 on Feb 2, 2012

rolleyes
Nov 16, 2006

Sometimes you have to roll the hard... two?
As you've realised, whatever you select for your salt must be static (and guaranteed to be immutable), at the very least at the individual account level, otherwise you lose the ability to authenticate anybody. If you're using per-account salts then don't use any fields which could ever possibly change, such as their email address, or even DOB if you can foresee updates being made in cases where someone has fat-fingered it.

A reasonably safe option would be their unique record ID from the accounts table + the timestamp of account creation (store this as a field in the accounts table).

Zhentar
Sep 28, 2003

Brilliant Master Genius
You can also just add a column for the salt.

For other recommendations, it's not a bad idea to bump the hash up to SHA-256, and then hash the hash ten thousand times in a row.

Strict 9
Jun 20, 2001

by Y Kant Ozma Post
I'm having issues with .NET deployment and would really love some help. Here is my current setup, using Visual Studio 2010, Git, and TeamCity on IIS7

1) Commit and push changes from local development server to live server

2) TeamCity triggers from Git changes and executes the steps below.

3)
code:
rmdir /Q /S c:\inetpub\wwwroot\deneb\website\sitecore
rmdir /Q /S c:\inetpub\wwwroot\deneb\website\sitecore_files


Remove CMS folder from web root. Otherwise I get errors when I copy them back in later

4) Build Visual Studio .sln file using Rebuild target

5) Run MSBuild using these parameters:

/P:Configuration=Deploy /P:DeployOnBuild=True /P:DeployTarget=MSDeployPublish /P:MsDeployServiceUrl=https://XXX:8172/MsDeploy.axd /P:AllowUntrustedCertificate=True /P:MSDeployPublishMethod=WMSvc /P:CreatePackageOnPublish=True

6) Copy a bunch of other required files

code:
xcopy /I /S /Y c:\BinFiles C:\inetpub\wwwroot\Deneb\Website\bin
mkdir C:\inetpub\wwwroot\Deneb\Website\sitecore\shell\override
icacls c:\inetpub\wwwroot\deneb\website\temp /grant IIS_IUSRS:(OI)(CI)F
icacls c:\inetpub\wwwroot\deneb\website\App_Data /grant IIS_IUSRS:(OI)(CI)F
icacls c:\inetpub\wwwroot\deneb\website\App_Data /grant NETWORK SERVICE:(OI)(CI)F
c:\Sitecore641-110324\robojob.bat
------

I feel like I'm missing something here. The problem is that the whole site goes down each time we need to push a change to the website. I thought the whole point of software like TeamCity was that it didn't. Any ideas on how to fix my configuration?

boo_radley
Dec 30, 2005

Politeness costs nothing

Strict 9 posted:

I'm having issues with .NET deployment and would really love some help. Here is my current setup, using Visual Studio 2010, Git, and TeamCity on IIS7

1) Commit and push changes from local development server to live server

2) TeamCity triggers from Git changes and executes the steps below.

3)
code:
rmdir /Q /S c:\inetpub\wwwroot\deneb\website\sitecore
rmdir /Q /S c:\inetpub\wwwroot\deneb\website\sitecore_files



Unrelated to your question, but we're evaluating Sitecore soon. What's your take on it?

rolleyes
Nov 16, 2006

Sometimes you have to roll the hard... two?

Zhentar posted:

You can also just add a column for the salt.

Given the purpose of the hash is to guard against password theft if someone breaches your database security, using a salt but storing it alongside the hash in the accounts table is a bit like installing a second lock on your door and leaving the key for the new lock hanging by a string from the exterior door handle.

This is one of those cases where 'security through obscurity' is actually kind of valid, because then any attacker needs not only to breach your database but gain access to your code as well in order to figure out the salt.

rolleyes fucked around with this message at 17:53 on Feb 2, 2012

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Even with the salts and hashed passwords in hand, an attacker has to brute-force each password individually rather than just constructing a rainbow table. If brute-forcing each password is an actual option then you chose the wrong hashing algorithm.

Dromio
Oct 16, 2002
Sleeper

Strict 9 posted:


3)
code:
rmdir /Q /S c:\inetpub\wwwroot\deneb\website\sitecore
rmdir /Q /S c:\inetpub\wwwroot\deneb\website\sitecore_files


Remove CMS folder from web root. Otherwise I get errors when I copy them back in later

This is why your site goes down. It can't be up while the files have been removed like this. It really has nothing to do with TeamCity.

No matter what, an IIS site is going to at least "hiccup" when the app pool has to recycle because of new binaries or config changes. You can minimize it though. Here's my process:

1. Copy my files to the server.
2. Unzip the files to a temporary folder parallel to the site folder (inetpub/wwwroot/sitenew).
3. Stop the app pool for the site.
4. Rename the original site folder (inetpub/wwwroot/site -> siteold)
5. Rename the temporary folder to the actual site folder name. (sitenew -> site)
6. Start the app pool
7. Clean up by deleting the copied folder (delete siteold)

This seems to minimize the downtime on ours. It could be improved if you identify the changes and determine if they actually require an app pool recycle. For example, a changed image or js file or plain html shouldn't require a recycle, so if only assets change skip the process above and just copy in the new assets. I haven't worried that much about it.
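The swap described above can be sketched as a batch fragment along these lines — the paths and the app pool name are placeholders, and appcmd ships with IIS7:

```bat
set ROOT=C:\inetpub\wwwroot
%windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"MySitePool"
ren %ROOT%\site siteold
ren %ROOT%\sitenew site
%windir%\system32\inetsrv\appcmd start apppool /apppool.name:"MySitePool"
rmdir /Q /S %ROOT%\siteold
```

The two renames are near-instant regardless of site size, which is why the downtime window shrinks to roughly the app pool restart itself.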

rolleyes
Nov 16, 2006

Sometimes you have to roll the hard... two?

Plorkyeran posted:

Even with the salts and hashed passwords in hand, an attacker has to brute-force each password individually rather than just constructing a rainbow table. If brute-forcing each password is an actual option then you chose the wrong hashing algorithm.

This is a fair point. It still feels wrong to me to be storing the salt in plain sight though.

uXs
May 3, 2005

Mark it zero!

rolleyes posted:

This is a fair point. It still feels wrong to me to be storing the salt in plain sight though.

Not really. Salting is done to make it much more difficult to build rainbow tables; it doesn't matter whether the salt is known or not.

Read this: http://stackoverflow.com/questions/213380/the-necessity-of-hiding-the-salt-for-a-hash/215165#215165

Also, making your own security stuff is very, very hard. Always try to use something that already exists.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
There's two main perspectives I've seen:
  1. Adding obscurity to an already secure system can be considered a form of defense in depth. Dynamically generating salts based on some obscure algorithm does not provide much security by itself, but it'll still slow down an attacker, so why not?
  2. Don't do unnecessary poo poo that hides where the actual security comes from. Writing secure software is hard enough when done as clearly as possible, and that extra obscurity is as likely to confuse a future maintainer as it is an attacker.

#2 seems to be far more popular among people who are actually qualified to do things with cryptography.

Strict 9
Jun 20, 2001

by Y Kant Ozma Post

boo_radley posted:

Unrelated to your question, but we're evaluating Sitecore soon. What's your take on it?

Been using it for about 4 years now, and honestly I love it. Very powerful system.

Dromio posted:

This is why your site goes down. It can't be up while the files have been removed like this. It really has nothing to do with TeamCity.

No matter what, an IIS site is going to at least "hiccup" when the app pool has to recycle because of new binaries or config changes. You can minimize it though. Here's my process:

1. Copy my files to the server.
2. Unzip the files to a temporary folder parallel to the site folder (inetpub/wwwroot/sitenew).
3. Stop the app pool for the site.
4. Rename the original site folder (inetpub/wwwroot/site -> siteold)
5. Rename the temporary folder to the actual site folder name. (sitenew -> site)
6. Start the app pool
7. Clean up by deleting the copied folder (delete siteold)

This seems to minimize the downtime on ours. It could be improved if you identify the changes and determine if they actually require an app pool recycle. For example, a changed image or js file or plain html shouldn't require a recycle, so if only assets change skip the process above and just copy in the new assets. I haven't worried that much about it.

Hmm. That's an idea. So all the copying takes place before the actual transition. Do you do all this through TeamCity? Like batch files for the folders, command line IIS commands for the app pool?

Dromio
Oct 16, 2002
Sleeper

Strict 9 posted:

Been using it for about 4 years now, and honestly I love it. Very powerful system.


Hmm. That's an idea. So all the copying takes place before the actual transition. Do you do all this through TeamCity? Like batch files for the folders, command line IIS commands for the app pool?

I write my build/deploy script outside of TeamCity. I'm using MSBuild, but it could be anything like Rake or NAnt or even dumb batch scripts. It's important that it's outside of the CI server so you can run it anytime/anywhere you want. Then just set up TeamCity to call that same build script.

Strict 9
Jun 20, 2001

by Y Kant Ozma Post

Dromio posted:

I write my build/deploy script outside of TeamCity. I'm using MSBuild, but it could be anything like Rake or NAnt or even dumb batch scripts. It's important that it's outside of the CI server so you can run it anytime/anywhere you want. Then just set up TeamCity to call that same build script.

Gotcha. So when you say MSBuild, you're referring to the XML file where you can tell MSBuild to delete, copy files, etc? I'm brand new to .NET deployment so I'm still trying to understand that.

Dr Monkeysee
Oct 11, 2002

just a fox like a hundred thousand others
Nap Ghost

Ithaqua posted:

Here's the proper way to implement IDisposable, right off of MSDN:

This is a little late but I'd like to point out that this pattern on MSDN is the proper way to implement IDisposable if you're wrapping an unmanaged resource (like the IntPtr in the example).

If all you're doing is wrapping another managed IDisposable, implementing IDisposable yourself is trivial: you just delegate Dispose() to the IDisposable member you wrapped. This is a point MSDN has always oddly underemphasized.

If all adaz is doing is storing a DBContext then that's all he needs to do.

Dr Monkeysee fucked around with this message at 20:58 on Feb 2, 2012

Mr. Crow
May 22, 2008

Snap City mayor for life
Probably a stupid question: I'm trying to update/replace a .resx at runtime and am having problems getting it to write to the correct file path. Here is the relevant code.

code:
using (System.Resources.IResourceReader resXr = new System.Resources.ResXResourceReader("Resources.resx"))
using (System.Resources.IResourceWriter resX = new System.Resources.ResXResourceWriter(@".\Resources.resx"))
{
	foreach (DictionaryEntry item in resXr)
	{
		resX.AddResource(item.Key.ToString(), asn.Write().ToArray());
	}

	resX.Generate();
}

string[] unmatchedElements;
CodeDomProvider codeProvider = new Microsoft.CSharp.CSharpCodeProvider();
CodeCompileUnit code = System.Resources.Tools.StronglyTypedResourceBuilder.Create("Resources.resx",
					"Resources", "Properties", codeProvider, true, out unmatchedElements);

//System.Text.Encoding {System.Text.UTF8Encoding}
StreamWriter streamWriter = new StreamWriter("Resources.Designer.cs");
codeProvider.GenerateCodeFromCompileUnit(code, streamWriter, new CodeGeneratorOptions());
streamWriter.Close();
The problem is it only writes to .\Debug\; I can't get it to write to the actual file path. Anything jumping out at anyone?
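A likely culprit: relative paths like "Resources.resx" resolve against the process's current working directory, which is bin\Debug when launched from Visual Studio, not the project folder. A sketch of anchoring the path explicitly — the folder used here is purely a placeholder for wherever the .resx actually lives:

```csharp
using System;
using System.IO;

// Relative paths resolve against the current working directory (bin\Debug
// under the VS debugger), so build an absolute path instead. The folder
// below is a hypothetical stand-in for the real location of the .resx.
string resxFolder = Path.Combine(Path.GetTempPath(), "MyProject");
Directory.CreateDirectory(resxFolder);
string resxPath = Path.Combine(resxFolder, "Resources.resx");
Console.WriteLine(resxPath);
```

Then pass resxPath to both the ResXResourceReader and the ResXResourceWriter (and the StreamWriter for the .Designer.cs) instead of the bare file names.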

Dietrich
Sep 11, 2001

Plorkyeran posted:

There's two main perspectives I've seen:
  1. Adding obscurity to an already secure system can be considered a form of defense in depth. Dynamically generating salts based on some obscure algorithm does not provide much security by itself, but it'll still slow down an attacker, so why not?
  2. Don't do unnecessary poo poo that hides where the actual security comes from. Writing secure software is hard enough when done as clearly as possible, and that extra obscurity is as likely to confuse a future maintainer as it is an attacker.

#2 seems to be far more popular among people who are actually qualified to do things with cryptography.

Using an individual, pseudo-random and obvious salt for users means that a hacker will need to compile their own rainbow table in order to crack any given user's password. If they want to crack a second user, they will need to compile a second rainbow table.

Using an individual, pseudo-random and hidden salt for users means that a hacker will need to compile several rainbow tables in order to crack any given user's password. The number of rainbow tables required could range anywhere from one to five hundred depending on the complexity of the salt generator. You don't have the slightest clue how many it will take, and you'll have to check each table for a hash match and try to use it on the target system to validate it. Every failed attempt can be used to potentially identify the fact that you are trying to hack the system.

You're a hacker. You've got several users tables to try to crack. They've each got a govpalin@yahoo.com user on them. One has a salt listed. The other does not. Which one do you crack?

The only trade-off is that instead of the salt being listed directly on the table, the algorithm to compute the salt is listed directly in the code. This is not an expensive trade-off for making your users table a less tempting target.

epswing
Nov 4, 2003

Soiled Meat

Monkeyseesaw posted:

If all you're doing is wrapping another managed IDisposable, implementing IDisposable yourself is trivial: you just delegate Dispose() to the IDisposable member you wrapped. This is a point MSDN has always oddly underemphasized.

Can you show this in an example?

Mr. Crow
May 22, 2008

Snap City mayor for life

epswing posted:

Can you show this in an example?

this?

code:
        protected virtual void Dispose(bool disposing)
        {
            _disposableClass.Dispose();
        }

The Gripper
Sep 14, 2004
i am winner

Plorkyeran posted:

There's two main perspectives I've seen:
  1. Adding obscurity to an already secure system can be considered a form of defense in depth. Dynamically generating salts based on some obscure algorithm does not provide much security by itself, but it'll still slow down an attacker, so why not?
  2. Don't do unnecessary poo poo that hides where the actual security comes from. Writing secure software is hard enough when done as clearly as possible, and that extra obscurity is as likely to confuse a future maintainer as it is an attacker.

#2 seems to be far more popular among people who are actually qualified to do things with cryptography.
I personally don't think using some crazyass algorithm for creating salts is necessary, but at the same time I'm not a big fan of storing salt with the data (or at least, not storing it on its own in an obvious way). The way we had it implemented was (as people above suggested) to find some immutable data for each user and use that as the salt. A combination of ID and timestamp, for example.

At least that way if the database is compromised an attacker would need to determine the salt from known data, which would be trivial if we had used a single immutable field (e.g. attacker creates an account, dumps the database to find his hash, tries combinations of id+password, password+id, timestamp+password, password+timestamp until he finds which immutable field is the salt).

Also, people (by people I mean novice devs and devs that haven't done their research) don't seem to realize that if you use a single salt and the salt is compromised, the attacker literally has the same data as if the password was unsalted. Even if the attacker gets all hashes with salts, he would need to create a rainbow table for each user individually, which is a huge task that might not be worth the effort if the attack was to steal personal details/fraud/identity theft rather than to target one specific user.

The Gripper fucked around with this message at 21:16 on Feb 2, 2012

Zhentar
Sep 28, 2003

Brilliant Master Genius

Dietrich posted:

Using an individual, pseudo-random and obvious salt for users means that a hacker will need to compile their own rainbow table in order to crack any given user's password.

If you're working with a unique salt, there's no point to a rainbow table. You just brute force it straight up.

Dietrich posted:

Using an individual, pseudo-random and hidden salt for users means that a hacker will need to compile several rainbow tables in order to crack any given user's password. The number of rainbow tables required could range anywhere from one to five hundred depending on the complexity of the salt generator. You don't have the slightest clue how many it will take, and you'll have to check each table for a hash match and try to use it on the target system to validate it. Every failed attempt can be used to potentially identify the fact that you are trying to hack the system.

I don't need to touch the target system to figure out if I've found the right salt. I can just take a list of the top 100 (poo poo, even just the top 10) most common passwords and test them against every user in the database. If I don't get a significant number of hash matches, I haven't figured out the salt yet.

Dietrich posted:

You're a hacker. You've got several users tables to try to crack. They've each got a govpalin@yahoo.com user on them. One has a salt listed. The other does not. Which one do you crack?

A)I probably would've stopped after I got the first users table...
and
B)Both. It's not like this poo poo is hard, why not?

Dietrich
Sep 11, 2001

The Gripper posted:

I personally don't think using some crazyass algorithm for creating salts is necessary, but at the same time I'm not a big fan of storing salt with the data (or at least, not storing it on its own in an obvious way). The way we had it implemented was (as people above suggested) to find some immutable data for each user and use that as the salt. A combination of ID and timestamp, for example.

At least that way if the database is compromised an attacker would need to determine the salt from known data, which would be trivial if we had used a single immutable field (e.g. attacker creates an account, dumps the database to find his hash, tries combinations of id+password, password+id, timestamp+password, password+timestamp until he finds which immutable field is the salt).

Also, people (by people I mean novice devs and devs that haven't done their research) don't seem to realize that if you use a single salt and the salt is compromised, the attacker literally has the same data as if the password was unsalted. Even if the attacker gets all hashes with salts, he would need to create a rainbow table for each user individually, which is a huge task that might not be worth the effort if the attack was to steal personal details/fraud/identity theft rather than to target one specific user.

Having a unique salt per user is absolutely required. If you only have one salt then it's a simple matter of creating a user with a known password, reverse-engineering possible salts that produce the resulting hash, and, once it's been discovered, generating a rainbow table to get every user's password. The cost of generating the cracked users list is the cost of cracking the salt plus the cost of generating one rainbow table.

If you've got a unique salt per user but it's stored on the table, then the cost of generating the cracked users list is the cost of generating n rainbow tables, where n is your number of users.

If you've got a unique salt per user that is calculated based on data in the table, then the cost of generating a cracked users list is the cost of cracking the salt algorithm plus the cost of generating n rainbow tables. The more complicated the salt algorithm, the more expensive this becomes, but the harder it becomes for you to maintain the application.

At any rate, the third option is demonstrably more secure, and you have complete control over the amount of additional security you want to heap on the system.

Zhentar
Sep 28, 2003

Brilliant Master Genius

The Gripper posted:

At least that way if the database is compromised an attacker would need to determine the salt from known data, which would be trivial if we had used a single immutable field (e.g. attacker creates an account, dumps the database to find his hash, tries combinations of id+password, password+id, timestamp+password, password+timestamp until he finds which immutable field is the salt).

My god, there might be dozens or even hundreds of possible combinations! And I can only test 50 million hashes/second on this machine. Christ, I hope I don't get too bored waiting for the answer.

Dietrich
Sep 11, 2001

Zhentar posted:

If you're working with a unique salt, there's no point to a rainbow table. You just brute force it straight up.

Brute forcing is pretty trivial to prevent with an e-mail to their registered account that they must act on to unlock the account after a fixed number of failed login attempts. If they've already compromised the target's email account then they can just reset the password anyway.

quote:

I don't need to touch the target system to figure out if I've found the right salt. I can just take a list of the top 100 (poo poo, even just the top 10) most common passwords and test them against every user in the database. If I don't get a significant number of hash matches, I haven't figured out the salt yet.

That's a clever way to get around the problem and one I hadn't really considered. I don't think that this really means 'So just bugger the whole thing and write the salt on the users table', though.

quote:

A)I probably would've stopped after I got the first users table...
and
B)Both. It's not like this poo poo is hard, why not?

The point isn't just to make it harder, the point is to make it more time consuming as well. The longer it takes for them to crack it, the longer you have as a responsible admin to discover the security breach and notify your users that their passwords may have been compromised.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Dietrich posted:

Brute forcing is pretty trivial to prevent with an e-mail to their registered account that they must act on to unlock the account after a fixed number of failed login attempts. If they've already compromised the target's email account then they can just reset the password anyway.

A "Rainbow Table" is a pre-computed database that allows efficient look-up of the hash input for a given hash output. The net process of building the database and then looking up a single value in it is not efficient; it is the "pre-computed" part that makes it worthwhile. A unique salt prevents the pre-computation. If there is a unique salt, then you just use the good old-fashioned trial and error method.


Dietrich posted:

The point isn't just to make it harder, the point is to make it more time consuming as well. The longer it takes for them to crack it, the longer you have as a responsible admin to discover the security breach and notify your users that their passwords may have been compromised.

The point is that you haven't made it significantly more time consuming. You've made yourself feel better by trying to pull some little trick in the naive hope that it will take a hacker longer than 15 minutes to figure out what you've done.

Meanwhile...
code:
		public static byte[] TimeConsumingHashFunction(byte[] input)
		{
			HashAlgorithm hash = HashAlgorithm.Create("SHA256");
			byte[] output = hash.ComputeHash(input);
			for (int x = 1; x < 50000; x++)
			{
				output = hash.ComputeHash(output);
			}
			return output;
		}
I've just made figuring out each password take 50,000 times longer. No tricks, no trying to sneak away extra secrets and then hoping the hackers won't be clever. It will take just as long to calculate each hash whether or not they have my source code. Unless I made some stupid mistake in there, which is usually what happens when you try to write your own security code.

It still wouldn't be enough to protect against people using a password in a common-passwords dictionary, but there's not really anything you can do about that (well, maybe you could load up your own common-password dictionary and reject any password in it...)
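If you'd rather not hand-roll the iteration loop at all, the framework already ships the same "deliberately slow, salted" idea as PBKDF2 via Rfc2898DeriveBytes. A sketch — the iteration count and key size here are illustrative, not recommendations:

```csharp
using System;
using System.Security.Cryptography;

// PBKDF2 via Rfc2898DeriveBytes: salted, iterated hashing in one vetted API.
byte[] salt = new byte[16];
RandomNumberGenerator.Create().GetBytes(salt); // unique random salt per user
byte[] hash;
using (var kdf = new Rfc2898DeriveBytes("correct horse", salt, 50000))
{
    hash = kdf.GetBytes(32); // the derived key doubles as the stored "hash"
}
Console.WriteLine(Convert.ToBase64String(hash));
```

You'd store the salt and iteration count alongside the hash, and verification is just re-deriving with the same inputs and comparing.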

Nereidum
Apr 26, 2008
Here's a good article on how to safely store a password.

As that article says, using BCrypt is a good idea. You can get a good .NET implementation of BCrypt here: http://bcrypt.codeplex.com/

To hash a password for storage, you do this:

code:
string hash = BCrypt.Net.BCrypt.HashPassword("SomePassword", 10);
The value 10 is the work factor, which basically controls how many iterations the algorithm runs. Each increase of one in the work factor doubles the number of iterations (and thus the time it takes to hash a password). You can omit the work factor, in which case it currently defaults to 10 (which takes a little over 100 ms to hash that password on the computer I'm using now).

You'll get back a string that looks like this: $2a$10$owgJ6i8h5v8ptcvno7aSDugQ2B5aSs8WCU.Xy9sl8Rkendpt1WtHW

This string includes the hash, the random salt used to generate it, and the work factor (the 10 near the beginning, in this case). Since the work factor is stored within the output, you're free to increase it later on once hardware catches up and the algorithm isn't slow enough anymore, and it'll still be able to verify old hashes. (Though it would probably be a good idea to recompute the hash and store it the next time the user logs in successfully, if you do this.)

You'd store the whole hash string above in your database. Then to verify a password, you just do this:

code:
if (BCrypt.Net.BCrypt.Verify(inputPassword, storedHash))
{
    // The password is correct
}
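If you do the upgrade-on-login trick mentioned above, you need to read the work factor back out of the stored hash. Since it's just the number after the second '$', a sketch like this works (helper names are invented; the actual BCrypt.Net.BCrypt.Verify call happens before any of this):

```csharp
using System;

static class BCryptUpgrade
{
    // A stored BCrypt string looks like "$2a$10$<salt and hash>";
    // the work factor is the number between the second and third '$'.
    public static int WorkFactorOf(string bcryptHash)
    {
        return int.Parse(bcryptHash.Split('$')[2]);
    }

    // After a successful verify, rehash with the current target factor
    // if the stored hash was created with a weaker one.
    public static bool ShouldRehash(string bcryptHash, int currentFactor)
    {
        return WorkFactorOf(bcryptHash) < currentFactor;
    }
}
```

So when `ShouldRehash(storedHash, 12)` returns true on a successful login, you'd call `BCrypt.Net.BCrypt.HashPassword(inputPassword, 12)` and overwrite the stored value.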

biznatchio
Mar 31, 2001


Buglord

Mr. Crow posted:

this?

code:
        protected virtual void Dispose(bool disposing)
        {
            _disposableClass.Dispose();
        }

You missed one detail:

code:
        protected virtual void Dispose(bool disposing)
        {
            if (disposing)
            {
                _disposableClass.Dispose();
            }
        }
Because if (disposing == false), you're finalizing, not disposing; and there's no need to propagate Dispose() to your fields, since if any of them need finalization, they're going to be sitting right there on the finalization queue alongside you (and in fact they might have already been finalized, since finalization order is non-deterministic with one exception that's not really relevant to this discussion).

In fact, pretty much the only thing you should ever do when (disposing == false) is clean up any unmanaged resources that your class is directly responsible for.

And by directly responsible I don't mean that you have a SqlConnection field that you need to close, since SqlConnection itself is the one directly responsible for the connection and can finalize itself without your help. I mean if you happen to have an IntPtr that represents an OS handle that needs to be closed or a pointer to a block of unmanaged memory you need to free (and if that's the case why aren't you using a SafeHandle?).

(It's also acceptable to use finalization to, say, implement object instance pooling, or to deregister an instance from some centralized tracking class; but really those things are more properly done during normal disposal, and having them happen during finalization gives your code a very bad smell.)
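For reference, the whole pattern the above describes looks something like this (field names are illustrative; the finalizer path only ever touches unmanaged state the class directly owns):

```csharp
using System;

class ResourceHolder : IDisposable
{
    private IDisposable _managed;    // e.g. a SqlConnection (illustrative)
    private IntPtr _unmanagedHandle; // e.g. a raw OS handle (illustrative)
    private bool _disposed;

    public ResourceHolder(IDisposable managed)
    {
        _managed = managed;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // already cleaned up; skip finalization
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Only safe to touch other managed objects on this path.
            if (_managed != null) _managed.Dispose();
        }
        // Unmanaged cleanup happens on both paths.
        // NativeMethods.CloseHandle(_unmanagedHandle); // hypothetical P/Invoke
        _unmanagedHandle = IntPtr.Zero;
        _disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false);
    }
}
```

And per the SafeHandle point above: if you wrap the raw handle in a SafeHandle instead of an IntPtr, you can usually drop the finalizer entirely and let the SafeHandle do that work.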

biznatchio fucked around with this message at 01:54 on Feb 3, 2012

Dromio
Oct 16, 2002
Sleeper

Strict 9 posted:

Gotcha. So when you say MSBuild, you're referring to the XML file where you can tell MSBuild to delete, copy files, etc? I'm brand new to .NET deployment so I'm still trying to understand that.

Exactly. I'm really only using it because the build evolved that way. At first it was just compiling a .sln and copying the output. But as the project grew I added targets to do much more.

I don't love the syntax though. If I were to start from scratch I'd probably choose psake or something less -- XML-y.


Dietrich
Sep 11, 2001

Zhentar posted:

A "Rainbow Table" is a pre-computed database that lets you efficiently look up the input that produced a given hash output, if that output is in the table. The net process of building the database and then looking up a single value in it is not efficient; it is the "pre-computed" part that makes it worthwhile. A unique salt defeats the pre-computation. If there is a unique salt, then you're back to the good, old fashioned trial and error method.


The point is that you haven't made it significantly more time consuming. You've made yourself feel better by trying to pull some little trick in the naive hope that it will take a hacker longer than 15 minutes to figure out what you've done.

Meanwhile...
code:
		public static byte[] TimeConsumingHashFunction(byte[] input)
		{
			using (HashAlgorithm hash = HashAlgorithm.Create("SHA256"))
			{
				// Hash once, then re-hash the output 49,999 more times,
				// so every password guess costs 50,000 hash computations.
				byte[] output = hash.ComputeHash(input);
				for (int x = 1; x < 50000; x++)
				{
					output = hash.ComputeHash(output);
				}
				return output;
			}
		}
I've just made figuring out each password take 50,000 times longer. No tricks, no trying to sneak away extra secrets and then hoping the hackers won't be clever. It will take just as long to calculate each hash whether or not they have my source code. Unless I made some stupid mistake in there, which is usually what happens when you try to write your own security code.

It still wouldn't be enough to protect against people using a password in a common passwords dictionary, but there's not really anything you can do about that (well, maybe you could load up your own common password dictionary and reject any password in it...)

I'm pretty sure you just created one hell of a birthday paradox.
