|
Mortanis posted:What's the feeling about ASP.Net MVC? I'm coming from a ColdFusion background, and trying to move over to ASP.Net. A few of my friends are suggesting using ASP.Net MVC 2 instead of straight ASP.Net, and I seem to have ignited a pretty strongly opinionated debate amongst others I know that are ASP.Net fanatics. Since I'm coming from ColdFusion, both are alien to me, and I'm trying to get the pros/cons of each. I've worked in both and find MVC to be a pretty significant improvement. All the new projects we're working on going forward are MVC unless we have a really, really good reason to use WebForms (like the app would be significantly easier with a controls library or something).

The MVC programming model maps much more closely to how HTTP actually works, so if you're already comfortable with web app programming it's a pretty shallow learning curve and you're not wrestling with as many opaque abstractions 8 levels removed from the actual HTML and request/response interaction. It's also a lot easier to control your URL structures and even do REST-style interfaces (though .NET 4.0 is supposed to have full routing support for WebForms, so that advantage is diminished somewhat).

Having said that, depending on what you're doing it's not going to be faster. MVC is still pretty immature compared to WebForms (or other web MVC frameworks for that matter), so there's more DIY and less 3rd-party support. If you're looking for quick and dirty, WebForms is still a pretty good choice, especially for CRUD apps.

Be prepared: if you go with ASP.NET MVC you'll be made fun of by Rails developers for being such a latecomer to the pattern.
|
# ¿ Feb 12, 2010 18:34 |
|
Mortanis posted:Next question: How are those of you using ASP.Net MVC dealing with designers? Our current shop is all over the place (part of what I'm trying to fix), and designers are used to simply editing files and FTPing them back to the server. They're used to using their OS and editor of choice. They're used to editing live sites, and ultimately being autonomous, only needing programmers for the technical aspects. What platform are you using right now? I can't imagine WebForms would be more amenable to designer independence than MVC, what with all its server controls; the designer wouldn't even be able to see what the HTML would look like.

I agree with everyone else that having designers edit the site directly without source control is a bad idea, but MVC at least makes this easier because the views (if you do it right) should literally just be raw HTML and inline code expressions that output the dynamic content. As long as they can work around the <% %> tags it's just an HTML document.
|
# ¿ Feb 17, 2010 18:23 |
|
BizarroAzrael posted:Ah, always the little things. Thanks. I suppose I could have just set it to anything and set the while to be for as long as it remained so. Just remember that local variables are never initialized by .NET, you have to do it yourself. That can be a little confusing at times since member fields *are* initialized automatically.
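A minimal sketch of that distinction (the class and field names here are hypothetical, just for illustration):

```csharp
using System;

public class InitDemo
{
    // Member fields are zero-initialized by the runtime automatically.
    public static int UntouchedField;

    public static void Main()
    {
        // int local;
        // Console.WriteLine(local); // compile error: use of unassigned local variable

        bool done = false;                 // locals must be assigned explicitly
        while (!done)
        {
            done = true;                   // loop until the flag flips
        }

        Console.WriteLine(UntouchedField); // 0 - fields default to zero
    }
}
```

The compiler's definite-assignment rules turn the local-variable case into a compile error rather than letting you read garbage, which is why the loop-flag pattern above needs the explicit `= false`.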
|
# ¿ Mar 1, 2010 22:41 |
|
Ashex posted:And I can't really debug since my department doesn't have a license for Visual Studio, I built this whole thing using Notepad++ and assistance from a goon a couple years ago. I did put in a purchase request for VS but it's up to my manager to approve it, and considering I am the one web dev (using that loosely) in my group, it's a toss up. If you can offer advice on how to go about debugging without VS, I would greatly appreciate it. Visual Studio Express is free and I'm pretty sure the Web Developer version would work for what you're doing. The Express editions are stripped down but they have the basic IDE, compiler, and debugger and I *think* Web Dev works with IIS. I wouldn't build a production app with it but it's gonna be a lot friendlier than Notepad++. http://www.microsoft.com/express/Web/
|
# ¿ Mar 14, 2010 07:38 |
|
Does your data contain any culturally relevant data? 'Cuz the XmlWriter isn't going to do any culture conversion of anything automatically, it just builds the XML you tell it to build. The Culture is only relevant if you're using methods that actually do culture-aware conversions or lookups (things like DateTime ToString methods or resx key lookups) and the fact that you want the invariant culture tells me you're not.
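To make the distinction concrete, here's a small sketch of the kind of culture-aware conversion where the Culture actually matters (the class name is made up for the example):

```csharp
using System;
using System.Globalization;

public class CultureDemo
{
    // ToString IS culture-aware, so here the culture you pass matters:
    public static string InvariantDate(DateTime dt)
    {
        return dt.ToString("d", CultureInfo.InvariantCulture);
    }

    public static void Main()
    {
        var dt = new DateTime(2010, 3, 19);
        // Invariant short-date format is MM/dd/yyyy regardless of OS settings:
        Console.WriteLine(InvariantDate(dt)); // 03/19/2010
    }
}
```

XmlWriter never calls anything like this on your behalf; if you hand it a pre-formatted string, that string goes into the XML verbatim.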
|
# ¿ Mar 19, 2010 23:10 |
|
Horse Cock Johnson posted:Is there no way to get a virtual path just based on a bunch of route values from your app's route table? What I want to be able to do is something like this: You can't do that because all the smarts behind turning route values into a virtual path are contained in HttpContext. UrlHelper doesn't actually do very much except wrap that functionality in a route-friendly API. That said, you can mock the right pieces depending on what you're trying to do. Specifically, HttpContextBase.Request.AppRelativeCurrentExecutionFilePath can be mocked to return whatever virtual path you want in a unit-test scenario.
|
# ¿ Aug 20, 2010 18:26 |
|
Orzo posted:I'm kind of curious about this, as I'm getting quite used to using var regularly. I haven't done any research on how the keyword works, but I thought it was just syntactic sugar and would ultimately compile to the same IL. Should I be worried? It is just syntactic sugar, but I'm guessing the "black magic" is that the compiler determines the type of a var to be as specific as possible, which could bite you with frameworks that assume you're passing around a more general interface or abstract class reference.
|
# ¿ Aug 31, 2010 17:30 |
|
ljw1004 posted:That will never be a problem... imagine: Yeah probably, I was just speculating on what wwb may have been referring to, I've never hit a problem myself. At the very least it's useful to know how the compiler resolves var and that it may not always resolve the way you think it would.
|
# ¿ Aug 31, 2010 19:24 |
|
Mustach posted:It always resolves to the compile-time type of the expression on the right-hand side of the =. var's not magic, dudes. Yeah. I know. And that can cause confusion sometimes if you're not reading too closely and expecting your reference to end up with a more general type. Like if you had: IEnumerable<Foo> myVariable = AListOfFoos(); and you changed it to: var myVariable = AListOfFoos(); it will compile to different code. You'll suddenly find that myVariable exposes List<T>'s API and you may not have intended that. I've never run into a case where this is a problem but I presume this kind of subtle type-shifting is what wwb may have meant when he talked about how var can trip you up sometimes. Dr Monkeysee fucked around with this message at 20:51 on Sep 1, 2010 |
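Here's that scenario as a runnable sketch (the method name AListOfNames is hypothetical, standing in for AListOfFoos):

```csharp
using System;
using System.Collections.Generic;

public class VarDemo
{
    // Declared return type is the concrete List<T>:
    public static List<string> AListOfNames()
    {
        return new List<string> { "a", "b" };
    }

    public static void Main()
    {
        IEnumerable<string> general = AListOfNames(); // typed to the interface
        var specific = AListOfNames();                // inferred as List<string>

        // specific.Add compiles because var picked up the concrete type;
        // general.Add would not - IEnumerable<T> has no Add method.
        specific.Add("c");

        Console.WriteLine(specific.Count);          // 3
        Console.WriteLine(specific.GetType().Name); // List`1
    }
}
```

Same right-hand side, different compile-time type for the variable, so the surface API you can call through it changes.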
# ¿ Sep 1, 2010 20:47 |
|
Ugg boots posted:I have a question about threading and Singletons. I noticed this pattern in our C# code, and we were trying to determine whether or not it would actually create exactly one AccessManager in the pathological cases: I know you already posted a link but the short answer is .NET guarantees that static initialization is thread-safe. It's impossible for two threads to run a type's static initialization concurrently (this is obviously not true for access to static members in general, hence the need for locking in your Instance accessor).
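A sketch of the initializer-based version, borrowing the AccessManager name from the question (the demo harness around it is made up):

```csharp
using System;
using System.Threading;

public sealed class AccessManager
{
    // The CLR guarantees this initializer runs exactly once,
    // even with multiple threads racing on first access.
    private static readonly AccessManager instance = new AccessManager();

    public static AccessManager Instance
    {
        get { return instance; } // no lock needed: init already happened
    }

    private AccessManager() { }
}

public class SingletonDemo
{
    public static void Main()
    {
        AccessManager a = null, b = null;
        var t1 = new Thread(() => a = AccessManager.Instance);
        var t2 = new Thread(() => b = AccessManager.Instance);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(ReferenceEquals(a, b)); // True
    }
}
```

If you move the construction out of the static initializer and into the Instance getter, you're back to needing your own locking, which is the pathological case the question was about.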
|
# ¿ Oct 7, 2010 17:26 |
|
PDP-1 posted:Yeah, I was planning on using XML at least during development since the resulting files would be human readable for debugging. I just took a look at BinaryFormatter and that looks like its set up to do exactly what I need to do. One thing to watch out for: the binary serialization format is considered an "implementation detail" of the CLR and can therefore change from version to version. Whether it has or not I don't know but if you use binary serialization as a persistence format it's possible that files you wrote out in, say, .NET 2.0 won't be readable if you upgrade your application to .NET 4.0.
|
# ¿ Oct 19, 2010 19:07 |
|
Orzo posted:Ouch, I didn't know that either. What is the solution for people that have been doing this? Does the API offer some sort of converter at least? Honestly I'm not sure. I do know the reason why is binary serialization was more intended for marshaling objects across appdomain/process boundaries and remoting rather than persistence. This stack overflow thread provides some insight into the issues: http://stackoverflow.com/questions/203694/stability-of-net-serialization-across-different-framework-versions Dr Monkeysee fucked around with this message at 05:55 on Oct 20, 2010 |
# ¿ Oct 20, 2010 05:52 |
|
Madox posted:Well, I'd like to avoid doing the lookup twice, since ContainsKey() must have to do a lookup unless its optimized somehow. I'll have to try some tests. You're worrying about this way too much. If your code has any performance problems I guarantee it's not because of little stuff like this. Throwing exceptions to control application flow, however, can definitely have a non-trivial effect on performance.
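For what it's worth, if the double lookup still bugs you, TryGetValue does the lookup once and avoids the exception entirely (GetOrDefault here is a made-up helper name):

```csharp
using System;
using System.Collections.Generic;

public class LookupDemo
{
    public static int GetOrDefault(Dictionary<string, int> map, string key)
    {
        int value;
        // One hash lookup, no exception on a miss:
        if (map.TryGetValue(key, out value))
            return value;
        return 0;
    }

    public static void Main()
    {
        var map = new Dictionary<string, int> { { "a", 1 } };
        Console.WriteLine(GetOrDefault(map, "a")); // 1
        Console.WriteLine(GetOrDefault(map, "b")); // 0
    }
}
```

That's the idiomatic middle ground between ContainsKey-then-index and try/catch around the indexer.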
|
# ¿ Oct 22, 2010 19:06 |
|
I've found the problem with == in C# is it's ambiguous when it's a reference comparison and when it's a value comparison. The framework types often don't make it very obvious, whereas Equals is usually pretty clear. You can probably do fine just assuming "reference types do a reference comparison and value types do a value comparison" but I've run into enough exceptions that I'm gun-shy at this point. In practice I find that using Equals on strings is almost always required though because I nearly always have to specify a StringComparison value different from the default, which the == operator doesn't allow you to do.
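A quick sketch of both points, the comparison choice and the reference-vs-value ambiguity:

```csharp
using System;

public class StringCompareDemo
{
    public static void Main()
    {
        string upper = "HELLO";
        string lower = "hello";

        // For string-typed operands, == is an ordinal, case-sensitive value comparison:
        Console.WriteLine(upper == lower); // False
        // Equals lets you say exactly which comparison you mean:
        Console.WriteLine(upper.Equals(lower, StringComparison.OrdinalIgnoreCase)); // True

        // With object-typed operands, == silently degrades to reference equality:
        object boxed1 = upper;
        object boxed2 = new string("HELLO".ToCharArray()); // same chars, new instance
        Console.WriteLine(boxed1 == boxed2);      // False - different references
        Console.WriteLine(boxed1.Equals(boxed2)); // True - string value comparison
    }
}
```

The last pair is the gotcha: the meaning of == shifted just because the static type of the operands changed, while Equals kept doing a value comparison.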
|
# ¿ Dec 2, 2010 19:41 |
|
Chuu posted:Does anyone know of any references or sample code that could help? Even better, is there an easy way to switch between app domains dynamically, i.e. the test fixtures could just save the current domain, create a new one, switch to it, and after the tests run destroy it? This blog post may help: http://www.paraesthesia.com/archive/2010/06/17/unit-testing-an-asp.net-virtualpathprovider.aspx His unit tests are creating a temporary app domain and running the test within that domain, then tearing it down after the test. I've used this technique on the rare unit test where I can't escape ASP.NET's hosting environment but even without the ASP.NET parts the principle should be the same.
|
# ¿ Dec 22, 2010 18:52 |
|
PhonyMcRingRing posted:Basically: God drat it Microsoft. I've never had a good experience with any of the AJAX server controls. They coded in some *massive* assumptions about the state of the ASP.NET pipeline. In your case apparently they assume no custom VirtualPathProviders had been implemented, in my case they assume that no url-rewriting had occurred. It was a mess.
|
# ¿ Jan 27, 2011 23:34 |
|
^^^ Maybe the community will do something cool with this.

scarymonkey posted:Looks like Redgate is gonna charge $35 for the next version of .NET Reflector: Redgate hasn't added a single "feature" to Reflector that was actually useful. The only thing Reflector needs is new CLR-version support. I think it's bullshit they're going to start charging for this while pushing all their value-add plugin nonsense. Why isn't there an open-source solution for this? I doubt Lutz Roeder is the only guy who can figure out how to decompile and translate .NET assemblies.
|
# ¿ Feb 2, 2011 22:01 |
|
Nurbs posted:I have an asp.net mvc2 website that I typically set up two different actions for 1 route, one for GET and one for POST. The model-binding works exactly the same way for GET and POST actions so a GET action can use a richer object as a parameter instead of a series of params matching your querystring inputs. In fact if your GET and POST action accept the same inputs and do the same thing (which I'm assuming from your comment that you set up both actions for "the same route") you can unify them into one action since the HTTP verb has no impact on how the incoming request is bound to your action parameters. Obviously this means you can still use ModelState.IsValid as a hook into input validation.
|
# ¿ Feb 10, 2011 18:54 |
|
more falafel please posted:Is there any way to give the get reference semantics? Like public Vector2& Vec { get { return vec; } set { vec = value; } }, except all C# style? Coincidentally Eric Lippert wrote a post today about this very case and why it's not currently in C#: http://blogs.msdn.com/b/ericlippert/archive/2011/06/23/ref-returns-and-ref-locals.aspx Turns out the CLR actually can support ref return types among other weird ref cases but C# enforces much stricter value-type semantics.
|
# ¿ Jun 23, 2011 19:52 |
|
Mr.Hotkeys posted:On this topic, why does Microsoft have all these strict guidelines for when to use structures but break them all the time? Not that the end result doesn't necessarily make sense, but it seems really dumb to have these rules in the first place if even you aren't going to follow them. I haven't worked with XNA much but their use of structs seems like they got a bunch of C++ devs designing it. There's a lot of PODs in XNA that they implemented as structs, but as people in this thread are rapidly discovering, value-type semantics in .NET are a bit more subtle than just "POD == struct type". I can't think of a good reason why the Vector types would need to be structs unless there's some weird under-the-hood optimization they gain when it's all sent to the GPU or something. For whatever reason XNA is a lot more struct-heavy than the .NET framework at large, where they typically do stick to the guidelines fairly well.
|
# ¿ Jun 25, 2011 00:38 |
|
Hoborg posted:Had they used reference types then things get complicated quickly, as well as slowed down: D3D's functions expect specific data type layouts which you cannot get with classes, a class-to-struct conversion would be too expensive given you have to pump several million Vector3 instances into the GPU for a single frame now. Yeah, that makes sense, and occurred to me literally as I was writing my last post.

ljw1004 posted:I think the C#/VB perspective is that mutable structs are evil. (Do a search for "mutable struct" and "eric lippert" for zillions of pages telling you why...) Yeah, believe me, I've read quite a bit on implementing value types in .NET because it's a bit of a high-wire act to get the semantics right. Which is why I found it interesting that XNA seemed so struct-heavy, but given the constraints in pushing data to the DirectX pipeline it makes some sense. I mostly work in ASP.NET and I can probably count on one hand the number of times I've actually needed to implement a struct of my own. It just doesn't come up very often in most application domains, as opposed to C++ where struct is basically just an annotation meaning "this object is just a data bag, go wild".
|
# ¿ Jun 25, 2011 03:13 |
|
It really depends on what you're using them for. Avoiding mutable structs is usually a good rule if your intent is to have other devs use them, since the value semantics can get confusing if you can both pass something by value AND change the value inline; it's easy to write code that makes you think you're changing one instance when you're actually changing another, or situations where a seemingly innocent operation ends up mutating something unexpectedly. Again, Eric Lippert has some good writings in this area.

The point of a struct is to provide a type that uses value semantics instead of reference semantics, which usually means you have to define your own comparison operators and hash functions. Those can be non-trivial, and if you get them wrong your type may behave in unexpected ways. Again, this is really only a major concern if your intent is to provide the struct for general use; if it's only used in your own projects then knock yourself out. Having said that, value semantics in my experience don't come up very often beyond the built-in types, though obviously this depends on the problem domain.

Newbies to .NET, especially people from a non-managed-code background, often want to favor structs over classes to "save memory" since they read somewhere that structs are allocated on the stack while classes are allocated on the heap. Ignoring the fact that in a GC'ed language this is a distinction you rarely have to care about, it's also only sort of true. An int may be allocated on the stack, but an array of ints still ends up in the managed heap because Array is a reference type, and the same goes for an int that's a member field of a class. It's so exceedingly rare to define a value type whose usage doesn't include appearing in collections or as data fields that the memory-usage argument is essentially irrelevant.
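The "seemingly innocent operation mutates a copy" trap is easy to demonstrate (MutablePoint is a made-up struct for the example):

```csharp
using System;
using System.Collections.Generic;

public struct MutablePoint
{
    public int X;
    public void MoveRight() { X++; }
}

public class StructCopyDemo
{
    public static void Main()
    {
        var array = new MutablePoint[1];
        array[0].MoveRight();          // arrays expose elements in place: mutates for real
        Console.WriteLine(array[0].X); // 1

        var list = new List<MutablePoint> { new MutablePoint() };
        list[0].MoveRight();           // indexer returns a copy: mutates a temporary
        Console.WriteLine(list[0].X);  // 0 - the "change" silently vanished
    }
}
```

Same-looking call, totally different outcome depending on whether the container hands you the element itself or a copy of it. That's exactly the confusion the mutable-struct guideline is trying to prevent.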
|
# ¿ Jun 26, 2011 07:32 |
|
rolleyes posted:I definitely see the reason for the "mutable structs are evil" guideline, although maybe "evil" is a bit strong. It's all too easy to treat them as a class and forget that you're working with a local copy when passing them around. Eric Lippert generally frames it as structs not really owning their own memory the way classes do, which can lead to weird cases if a struct is mutable: you can write code that transforms "this" in a mutable struct and literally makes it a new instance, something you can't do with a class. That said, it's obviously a rule of thumb, and mutable structs can be found in the .NET framework; they're just not very common.

That's interesting that the beginner book emphasizes the memory aspect of it, as the advanced stuff seems to explicitly downplay it in favor of focusing on the semantics of structs vs classes. It's probably the simplest and most obvious differentiating feature if you want to get the point across quickly.
|
# ¿ Jun 27, 2011 05:58 |
|
^^ Oh yeah, and dynamic, which is pretty cool but, like the generic covariance etc., only comes up if you really need it.

Spasms posted:I was just about to sit down and read C# 3.0 Unleashed: With the .NET Framework 3.5, but realized that this book is probably a bit out of date with the release of 4.0. Are the changes between the releases significant enough that I should look for a newer book? I'm looking to learn C#, ASP.NET and VB.NET, so please let me know the most recent books that I should be looking to pick up. If I recall correctly the main additions in C# 4 were optional parameters and named parameters, which go hand-in-hand and you can wrap your head around in about 10 seconds. They also added generic covariance and contravariance, but nobody wraps their head around that unless they have to.
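The optional/named parameter additions really are 10-second material; a sketch (Greet is a made-up method):

```csharp
using System;

public class GreetingDemo
{
    // C# 4 optional parameters: callers can omit anything with a default.
    public static string Greet(string name, string greeting = "Hello", bool shout = false)
    {
        var message = greeting + ", " + name;
        return shout ? message.ToUpperInvariant() : message;
    }

    public static void Main()
    {
        Console.WriteLine(Greet("world"));              // Hello, world
        // Named arguments let you skip the middle parameter entirely:
        Console.WriteLine(Greet("world", shout: true)); // HELLO, WORLD
    }
}
```

Before C# 4 you'd have written a pile of overloads to get the same call sites.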
|
# ¿ Aug 14, 2011 09:51 |
|
Ithaqua posted:There's also the parallel task library and about a million other things of varying levels of usefulness. Well yeah but the TPL isn't an addition to C#, it's an addition to the .NET framework. Spasms was asking specifically about language features. If you include framework updates in "what's new in C#" then every C# release has been massive.
|
# ¿ Aug 15, 2011 02:36 |
|
It's sorta interesting to note that the stacking of using statements isn't anything special; it's just a statement idiom that's generally frowned upon elsewhere. Just like C/C++ and Java, C# allows a control statement to be followed by a single statement instead of a block of statements, and a using statement itself counts as a single statement. E.g.:

```csharp
using System.IO;

class StackedUsings
{
    static void Main()
    {
        // The second using is the single-statement body of the first,
        // so no braces are needed between them...
        using (FileStream file = File.Create("out.dat"))
        using (BufferedStream buffered = new BufferedStream(file))
        {
            buffered.WriteByte(42);
        }

        // ...which is exactly equivalent to the explicitly nested form:
        using (FileStream file = File.Create("out2.dat"))
        {
            using (BufferedStream buffered = new BufferedStream(file))
            {
                buffered.WriteByte(42);
            }
        }
    }
}
```

The same brace-less trick works after if, while, and for, which is where it's usually considered bad form; stacked usings are about the only place the single-statement body is accepted as idiomatic.
Dr Monkeysee fucked around with this message at 21:03 on Sep 11, 2011 |
# ¿ Sep 11, 2011 21:00 |
|
His Divine Shadow posted:Thanks, I tried to use a partial view along with its own code in the controller, that view outputs a list properly when I tested and moved it outside the shared folder and viewed on its own. Putting all this logic in the views (even if it's farmed out to a helper class) isn't very MVC-ish. If you have some functionality that needs to be executed on every action you can subclass ActionFilterAttribute and decorate every action that needs that functionality, or the entire controller if they all do. If you need pluggable functionality (like "render this data feed here") you can call controller actions directly from views using Html.Action(). This spins up a light-weight request and executes your controller action, returns the action result, and renders the response right there in the view. Html.Action() allows you to sort of compose multiple actions together using the view as the glue, though you probably don't want to shove everything through that technique.
|
# ¿ Oct 13, 2011 00:31 |
|
Eggnogium posted:This doesn't actually impact me but out of curiosity, are readonly fields populated before the constructor is called? Like if I needed to do a few lines of processing before having everything set up to instantiate my readonly field, would I be able to do that and assign to the field in the constructor? Or does the initialization have to be in-line in the class definition? Inline initialization is syntactic sugar for constructor code anyway. If you look at the actual IL, any inline field initialization statements are just copied into the top of all constructors defined on the class, so yes, you can do your processing in the constructor and assign the readonly field there.
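A small sketch of both options in one type (Widget and its numbers are made up for the example):

```csharp
using System;

public class Widget
{
    // This initializer is syntactic sugar: the compiler copies the assignment
    // into the top of every constructor, before the constructor body runs.
    private readonly int capacity = 10;

    public Widget()
    {
        // 'capacity' is already 10 here thanks to the initializer.
        // readonly fields may also be assigned inside a constructor,
        // so you can do a few lines of processing first:
        int computed = capacity * 2;
        capacity = computed; // legal: constructors can write readonly fields
    }

    public int Capacity
    {
        get { return capacity; }
    }
}
```

Outside a constructor (or the inline initializer) any write to `capacity` is a compile error, which is the whole point of readonly.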
|
# ¿ Oct 29, 2011 05:23 |
|
I can't imagine how else you'd do it. "type" in parent<type> isn't the parent's type, the parent's type is parent<type>. Imagine if you had parent<type1, type2> instead, which one would you expect to get as the parent "type"?
|
# ¿ Nov 16, 2011 00:12 |
|
Mr. Crow posted:Supposed to be being the crucial problem, I can't genericize everything/list itself because I don't actually know what kind of data I'm going to be reading in and can't assume it's valid data, so in the base the list is just a List<ICommon> so it can be validated later. It's for convenience in other areas (like it should have been here). So it's a List<ICommon> (or something much like it), which means everything in it is guaranteed to be an ICommon. But then it sounds like the collection contains heterogeneous implementations of ICommon, of which you need to know the actual type (or parent type) after pulling it out of the collection? Am I getting that right? This sounds like a disaster of a design. Like you know you have an ICommon, but in case A you need an ICommon that is also a FooParent and in case B you need an ICommon that is also a BarParent, and that information is lost by the collection since everything is typed to ICommon. Basically it's equivalent to using an old-school ArrayList and you need to re-discover the type information of each element as it's used?
|
# ¿ Nov 17, 2011 00:19 |
|
Ithaqua posted:Here's the proper way to implement IDisposable, right off of MSDN: This is a little late but I'd like to point out that this pattern on MSDN is the proper way to implement IDisposable if you're wrapping an unmanaged resource (like the IntPtr in the example). If all you're doing is wrapping another managed IDisposable, implementing IDisposable yourself is trivial: you just delegate Dispose() to the IDisposable member you wrapped. This is a point MSDN has always oddly underemphasized. If all adaz is doing is storing a DbContext then that's all he needs to do. Dr Monkeysee fucked around with this message at 20:58 on Feb 2, 2012 |
# ¿ Feb 2, 2012 20:56 |
|
Mr. Crow posted:this? It's even simpler. You're still using the Dispose(bool) pattern used for unmanaged resources. The minimal example is shown below. Let's assume we're wrapping a specific IDisposable type like a Stream.

```csharp
using System;
using System.IO;

class StreamWrapper : IDisposable
{
    private readonly Stream stream;

    public StreamWrapper(Stream stream)
    {
        this.stream = stream;
    }

    // No finalizer, no Dispose(bool): the wrapped Stream already
    // handles its own unmanaged cleanup. Just delegate.
    public void Dispose()
    {
        stream.Dispose();
    }
}
```
I use Stream in this example but it's a safe assumption for any of the IDisposables in the Framework. Third-party libraries are more of a toss-up depending on who they are, I suppose, and as I mentioned before, if you're working directly with an unmanaged resource (like a raw file handle or something) then you do need to follow the MSDN pattern posted earlier.

biznatchio posted:Because if (disposing == false), you're finalizing, not disposing; and there's no need to propagate Dispose() to your fields, since if any of them need finalization, they're going to be sitting right there on the finalization queue alongside you (and in fact they might have already been finalized, since finalization order is non-deterministic with one exception that's not really relevant to this discussion). You're overthinking it as well. If all you're managing is another IDisposable you don't need a finalizer. Why? Because the IDisposable you're delegating to already has a finalizer (a safe assumption for anything in the Framework; you'd need to verify this yourself for third-party libraries or classes written by coworkers you don't trust). All you've done by implementing a finalizer is gummed up the finalization queue with an object that tells another object also on the finalization queue to clean itself up. You've just recreated the sort of weird re-entrant condition the finalizer pattern was needed for in the first place!

The upshot: you only need the finalizer pattern if your type directly manages an unmanaged resource (or you're working with a borked IDisposable that doesn't manage itself correctly, in which case you're probably doing this whole thing as a workaround).

edit: re-reading your post I really just reiterated your point, but what I'm trying to get across is that you don't need the Dispose(bool) pattern if you're just wrapping another IDisposable. I suppose you could stub it out if you're writing a base class whose children may conceivably need to manage native resources (e.g. Stream). But if that's not the case there's no reason to make it more complicated than it is. Dr Monkeysee fucked around with this message at 04:43 on Feb 3, 2012 |
# ¿ Feb 3, 2012 04:25 |
|
Edit: gently caress me I quoted myself
|
# ¿ Feb 3, 2012 04:37 |
|
What's the scenario where you need both?
|
# ¿ Feb 3, 2012 08:12 |
|
adaz posted:e: Actually reading the rest of your posts it seems like the only thing I even need is a single dispose without any finalizing and checking Yep. You only need finalizing for cleaning up unmanaged resources. And doing it for types that don't need finalization actually generates more work for the GC, since the finalization queue gets blown out by all these objects just saying "hey, clean that guy up over there. yes, that guy, the one you already know about".

edit: to elaborate on that a little bit, the reason the Dispose pattern exists is twofold:

1) The finalizer itself guarantees that unmanaged resources will get disposed by the GC. This makes it a lot harder to accidentally leak resources. However, it's non-deterministic, so it doesn't prevent you from eating up all your handles (or whatever) before the GC gets around to freeing them; it's not fail-safe. You can still starve yourself.

2) The Finalize/Dispose pattern on MSDN is there to ensure that unmanaged resources don't get freed more than once, which usually causes a catastrophic error (think delete-ing a pointer twice in C++. It doesn't end well). The problem is .NET offers both deterministic and non-deterministic freeing, in the form of IDisposable and the finalizer respectively. The GC doesn't know whether a finalizable object has had its Dispose() method called, so any finalizable object gets dumped onto the finalization queue regardless of what happens to it during its lifetime. This is great if the developer forgets to dispose of the object: the finalization queue will take care of it by calling the object's finalizer. But what if the developer *did* dispose the object? Now the finalizer may blow up due to freeing an already-freed resource. This may also occur if a parent object disposes its own member objects which themselves contain freeable resources. They may all be on the finalization queue, but the top-level class is a good citizen and frees all its children, meaning those children will again get pinged more than once by the GC.

Hence all that noise with the protected virtual void Dispose(bool) crap to keep track of whether the object has been disposed of yet and in what context. It's there to ensure that resource releasing is re-entrant, i.e. it can be called more than once without exploding, because chances are it will be called more than once. That's the reasoning behind it. Freeing unmanaged resources more than once is Generally Bad, so the Dispose pattern ensures that doesn't happen. But the whopping caveat to that whole pattern is it only matters for unmanaged resources. Managed resources don't have these problems, so implementing IDisposable to clean up other IDisposables is really, really simple. It's annoying that MSDN doesn't make that distinction obvious, because you end up with .NET devs following official documentation like they should and creating all this useless boilerplate for every single IDisposable implementation they ever come across. Dr Monkeysee fucked around with this message at 21:17 on Feb 3, 2012 |
# ¿ Feb 3, 2012 20:52 |
|
A slightly more involved alternative if you don't want to pull in JSON.NET is to write a JavaScriptConverter subclass and pass it off to the JavaScriptSerializer which is all JsonResult uses under the hood. You can customize field names via JavaScriptConverter, though if you want to support anonymous types you'd have to do some reflection to discover what the converter is being given at runtime.
|
# ¿ Apr 4, 2012 01:46 |
|
Cat Plus Plus posted:It implements nearly all of .NET 4, with most noticeable missing feature being WPF. http://www.mono-project.com/Compatibility It's missing a bunch of WCF parts as well though that's rapidly becoming a moot point with Web API being the new MS thing. I've also found MonoDevelop/Xamarin Studio to be pretty rough around the edges but there's basically nothing in the IDE universe that compares to VS so that's not particularly surprising. Given your alternative on OS X is Xcode and Objective-C/C/C++ it's certainly not a bad way to go. Their strengths are really in cross-platform and mobile and that's where most of their attention lies. I've found it to be just an ok replacement for the complete Windows .NET toolchain regarding desktop or web services. Dr Monkeysee fucked around with this message at 19:41 on Jun 18, 2013 |
# ¿ Jun 18, 2013 19:37 |
|
Essential posted:Actually, I got to Delegates because I was wondering what in the hell Func was, which if I remember right from the msdn docs, Func does some Delegate stuff in it. Just to close the loop on this, Action and Func are specific definitions of delegate types for common use-cases. They're not a whole new thing that replaces delegates.
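Side by side, a hand-rolled delegate type and its Func equivalent (the names here are invented for the demo):

```csharp
using System;

public class DelegateDemo
{
    // A hand-rolled delegate type for "takes an int, returns an int":
    public delegate int IntTransform(int x);

    public static int ApplyCustom(IntTransform f, int x) { return f(x); }

    // Func<int, int> is just a pre-declared delegate for that same shape:
    public static int ApplyFunc(Func<int, int> f, int x) { return f(x); }

    public static void Main()
    {
        Console.WriteLine(ApplyCustom(n => n * 2, 21)); // 42
        Console.WriteLine(ApplyFunc(n => n * 2, 21));   // 42

        // Action<T> covers the void-returning cases:
        Action<string> print = Console.WriteLine;
        print("done");
    }
}
```

Func and Action just save you from declaring a new delegate type for every common signature; under the hood they're ordinary delegates.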
|
# ¿ Aug 22, 2013 19:11 |
|
gariig posted:Does anyone have an opinion on CLR via C#? It's a very good overview of what's going on under the hood of the CLR and how C# maps onto that but it's a little out of date at this point (I think the latest edition covers up to .NET 3.5 which was still the 2.0 CLR). However the author has some very specific opinions about threading and at least a couple chapters get overwhelmed by pimping his own thread libraries. I'd be interested to see his take on the TPL and C# 5.0 because he certainly wasn't a fan of MS's solutions at the time. I wouldn't say it's got a whole lot of practical uses for an application developer unless you're doing crazy things with assembly manifests or something but if you like an overview of the CLR nuts & bolts it's fun. Dr Monkeysee fucked around with this message at 20:50 on Aug 30, 2013 |
# ¿ Aug 30, 2013 20:48 |
|
Rocko Bonaparte posted:What's the convention in VS2012 for using Nuget packages with source control? We're using git and don't want to upload all the binary packages we're using. Is there a way to keep the references to the packages intact so that a new developer can have them acquired without a fuss? Turn on package restore in VS's package manager settings and check in the packages.config that gets added to your project file. .gitignore the packages/ folder.
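Concretely, what ends up tracked vs ignored looks something like this (the MyProject name is a stand-in; the packages folder path assumes NuGet's default solution-root layout):

```
# .gitignore - don't commit the restored binaries
packages/

# tracked in git: the manifest package restore reads on build
MyProject/MyProject.csproj
MyProject/packages.config
```

With restore enabled, a new developer just clones and builds; NuGet re-downloads everything listed in packages.config into packages/ on the first build.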
|
# ¿ Sep 14, 2013 23:00 |