Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


I don't know anything about the costs of robust, enterprise legacy software support (well, I now know what it costs the end user to attempt to get set up, holy poo poo, but I lack context here). How expensive is it for them to turn "old software and hardware" into "old software and new hardware?" Are massive computers now considerably cheaper than they were back when thin clients first became widespread in business? Or still as expensive? And why?

Sorry for inserting acknowledged ignorance into a pretty thoughtful discussion, but I'm very curious as to where all that money goes and how it's justified beyond "the market'll bear it."


KillHour
Oct 28, 2007

Wake up and
smell the murder.



They're expensive because the worldwide market for these things is <1000 customers. IBM has to make back all of their R&D and development on a few hundred sales. If you want the justification on the customer's side, it's that they need to run their code every month/week/night, and it needs to be 100% correct. As the companies (and their databases) get bigger, you need more and more insane hardware to get it done on time.

"I can make your lovely code go 5% faster" opens up a lot of checkbooks.

As for the "Why don't they just rewrite their code" question, I worked for one of those companies. And they tried. Several times. Every time, it cost tens of millions and was a massive disaster.

Edit: Here you go!

https://www.google.com/webhp?source...o+SAP+australia

KillHour fucked around with this message at Oct 21, 2014 around 14:46

PCjr sidecar
Jan 26, 2011

dude, you gotta end it on the rhyme

Gwaihir posted:

The best part (OK maybe not the best because Power systems have a ton of cool poo poo) is that IBM's rack consoles still have the most fantastic keyboards and trackpoint implementations, unlike the recent hosed up Lenovo versions!!!

System Z isn't Power; it's a glorious CISC monstrosity: https://share.confex.com/share/122/...nce_Summary.pdf

Gwaihir
Dec 8, 2009


KillHour posted:

They're expensive because the worldwide market for these things is <1000 customers. IBM has to make back all of their R&D and development on a few hundred sales. If you want the justification on the customer's side, it's that they need to run their code every month/week/night, and it needs to be 100% correct. As the companies (and their databases) get bigger, you need more and more insane hardware to get it done on time.

"I can make your lovely code go 5% faster" opens up a lot of checkbooks.

The whole "This code MUST run 100% correctly" part is such a riot to me, since we're in the situation of "Fantastic hardware running utter dogshit programming." Outside of two dark weekend system upgrades where we installed and migrated to totally new systems, we've never had an OS or hardware related downtime event in the last 5 years. Vendor software related? All the fuckin time. We have a much more modest (Well... relative to that first one I linked) system like this one: http://c970058.r58.cf2.rackcdn.com/...0100719_es.pdf.

(Ask) me about vendors that don't put primary keys on their tables!

e:

PCjr sidecar posted:

System Z isn't Power; it's a glorious CISC monstrosity: https://share.confex.com/share/122/...nce_Summary.pdf

Yeah, it's just similar enough to get you in trouble in use, because some things are the same and some things are totally not!

Moey
Oct 22, 2010



KillHour posted:

I've programmed one of these remotely for IBM's Master the Mainframe competition. Even though I couldn't see it, it totally felt like I was in the movie Hackers.

I did that twice while in school. We had a decent mainframe lab that IBM sponsored the poo poo out of. It was apparently rare for a school to still have COBOL/JCL courses.

r0ck0
Sep 12, 2004
r0ck0s p0zt m0d3rn lyf


Removed the period at the end of your link.

KillHour
Oct 28, 2007

Wake up and
smell the murder.



Gwaihir posted:

The whole "This code MUST run 100% correctly" part is such a riot to me, since we're in the situation of "Fantastic hardware running utter dogshit programming." Outside of two dark weekend system upgrades where we installed and migrated to totally new systems, we've never had an OS or hardware related downtime event in the last 5 years. Vendor software related? All the fuckin time. We have a much more modest (Well... relative to that first one I linked) system like this one: http://c970058.r58.cf2.rackcdn.com/...0100719_es.pdf.

(Ask) me about vendors that don't put primary keys on their tables!

e:


Yeah, it's just similar enough to get you in trouble in use, because some things are the same and some things are totally not!

Most of these huge companies use in-house software written for these mainframes 30+ years ago. The problem is when they try to move to the shiny new commercial stuff that runs on x86 and it breaks everything (*cough* SAP *cough*). Say what you will about those in-house programs not being pretty and having spaghetti code, but after 30 years of improvements/bugfixes, they're probably some of the most stable, reliable, terrifying-but-functional Frankenstein's monsters out there. Of course, nobody wants to touch them now because they're terrified that any change will break things horribly (it will).

There's something to be said about a program whose bugs are all well documented. It's not as good as having no bugs, but it's pretty close.


Moey posted:

I did that twice while in school. We had a decent mainframe lab that IBM sponsored the poo poo out of. It was apparently rare for a school to still have COBOL/JCL courses.



The community college I went to had a mainframe lab. I took a course on the AS/400, which was pretty bad-rear end.

KillHour fucked around with this message at Oct 21, 2014 around 15:05

Rastor
Jun 2, 2001



KillHour posted:

Most of these huge companies use in-house software written for these mainframes 30+ years ago. The problem is when they try to move to the shiny new commercial stuff that runs on x86 and it breaks everything (*cough* SAP *cough*). Say what you will about those in-house programs not being pretty and having spaghetti code, but after 30 years of improvements/bugfixes, they're probably some of the most stable, reliable, terrifying-but-functional Frankenstein's monsters out there.
Airline systems are a classic example of this.

Did you know: your airline reservation is a six-digit letters-and-numbers code because back in the day that represented the block of disk on the mainframe that contained the data related to the reservation. There was no such thing as a "database" or "locks", you just read/wrote that block directly.
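The scheme described above can be sketched in a few lines. This is purely illustrative: the alphabet, the six-character width, and the direct locator-to-block mapping here are invented to show the idea that the locator itself encodes the storage address, with no database index in between.

```python
# Hypothetical sketch: a six-character record locator treated as a direct
# disk-block address, in the spirit of early airline reservation systems.
# Alphabet, width, and layout are invented for illustration.

ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # base-36 digits

def locator_to_block(locator: str) -> int:
    """Decode a 6-character locator into a block number."""
    block = 0
    for ch in locator.upper():
        block = block * 36 + ALPHABET.index(ch)
    return block

def block_to_locator(block: int) -> str:
    """Encode a block number back into a 6-character locator."""
    chars = []
    for _ in range(6):
        block, digit = divmod(block, 36)
        chars.append(ALPHABET[digit])
    return "".join(reversed(chars))

# Round trip: the locator *is* the address -- no index lookup needed.
assert locator_to_block(block_to_locator(123456)) == 123456
```

The upside is obvious (one disk seek, zero lookup machinery on 1960s hardware); the downside is that the "key" is welded to the physical layout forever.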

go3
Dec 20, 2006


I have a client that is a travel agent and oh man the airline reservation systems are basically loving voodoo

Rime
Nov 2, 2011
Probation
Can't post for 3 days!


Just think of how much efficiency could be gained if they ditched their slavish dedication to such outdated methods and moved into the 21st century.

Rastor
Jun 2, 2001



Not as much efficiency as the many billions of dollars it would cost, unfortunately.

sincx
Jul 13, 2012


Rime posted:

Just think of how much efficiency could be gained if they ditched their slavish dedication to such outdated methods and moved into the 21st century.

I think this would only be possible if there was a global 3-day shutdown of all airline traffic (à la 9/11) and all airlines switched to a new system at exactly the same time. Not gonna happen.

phongn
Oct 21, 2006


Rime posted:

Just think of how much efficiency could be gained if they ditched their slavish dedication to such outdated methods and moved into the 21st century.
Virgin America built their reservation system backend from the ground up instead of using "outdated" systems like SABRE or SHARES and it was an epic clusterfuck.

Southwest Airlines also has a non-legacy system and it's taken them forever to handle such simple things as international reservations. The post-9/11 basic security requirements like "make sure each person has a ticket, and that if you check luggage you're on the plane" were apparently insanely difficult for them to implement (never mind how long it took them to do the Trusted Traveler program).

New code ain't all that hot, especially for complicated systems. The US Federal Government has repeatedly tried to replace any number of systems with new ones, spent hundreds of millions if not billions, and repeatedly failed. You can't just wave a magic wand and go "well, we'll do it faster, better, cheaper and more agile with modern techniques!"

The_Franz
Aug 8, 2003


phongn posted:

New code ain't all that hot, especially for complicated systems. The US Federal Government has repeatedly tried to replace any number of systems with new ones, spent hundreds of millions if not billions, and repeatedly failed. You can't just wave a magic wand and go "well, we'll do it faster, better, cheaper and more agile with modern techniques!"

Bear in mind that when it comes to US government contracts you are generally dealing with nepotism, cronyism and "the lowest bidder" on top of any technical issues. It's hard enough to write good enterprise software and it's even harder when you are dealing with some bottom of the barrel types who got the job because the owner of the company plays golf with the nephew of a congressman.

MrPablo
Mar 21, 2003


The_Franz posted:

Bear in mind that when it comes to US government contracts you are generally dealing with nepotism, cronyism and "the lowest bidder" on top of any technical issues. It's hard enough to write good enterprise software and it's even harder when you are dealing with some bottom of the barrel types who got the job because the owner of the company plays golf with the nephew of a congressman.

This is only partially true; government contracts are subject to the Federal Acquisition Regulation (FAR). Among other things, FAR is "supposed" to give contractors a fair chance to compete for government contracts and also give the government a chance to get a better price and eliminate contractors who are unable to actually provide the services they are bidding on.

I'm not saying there isn't waste, fraud, or abuse; only that it's slightly more nuanced than the type you're describing above.

chizad
Jul 9, 2001

'Cus we find ourselves in the same old mess
Singin' drunken lullabies

KillHour posted:

As for the "Why don't they just rewrite their code" question, I worked for one of those companies. And they tried. Several times. Every time, it cost tens of millions and was a massive disaster.

Hell, at my previous job (heavy construction equipment and forklift dealer) their ERP system was a mess of COBOL and flat databases that ran under AIX 4.x on an RS/6000. The vendor also had a newer system that was actually backed by an RDBMS and had a native Windows client instead of a terminal emulator. We looked into migrating to it, and just the hardware and software costs for the new system were over a million.

go3
Dec 20, 2006


.gov projects are more likely to be ruined by unrealistic deadlines and constantly changing requirements driven by politics, à la healthcare.gov

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


Thanks for the insight, everyone! I know it costs a shitload to go from a potentially good idea to a finished chip (500 mil and up for complicated things, right?). I guess the whole ecosystem surrounding their products puts them in a substantially different class than what I tend to think of as traditional durable goods.

Where could I read up on how IBM's distribution model compares to Intel's? I am loving the discussion but it is getting a little far afield I guess, haha.

BobHoward
Feb 13, 2012

Special Operations Executive
Q Section




go3 posted:

.gov projects are more likely to be ruined by unrealistic deadlines and constantly changing requirements driven by politics, à la healthcare.gov

This applies to more than just software.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Anyone remember the article about the FERS paperwork system a while back? THAT is why legacy software and systems cannot be replaced even IF we had contractors that all meant very well and such. It's not exactly that much better in commercial software either; vendors have to deal with their customers' issues and adapt accordingly, or someone else will gladly take the billions to deliver some really lame "solution." It is a minefield of barriers all over the place. Most of my peers have had cushy times just hacking together random crap, getting millions in funding, and eventually going up the chain after an acquisition, only for people to realize that it's completely and utterly useless in a BigCorp situation without even MORE money dumped in.

Agreed posted:

I don't know anything about the costs of robust, enterprise legacy software support (well, I know now what it costs to the end user to attempt to get setup, holy poo poo, but I lack context here). How expensive is it for them to turn "old software and hardware" into "old software and new hardware?" Are massive computers now considerably cheaper than they were back when thin clients first became widespread in business? Or still as expensive? And why?
There are x86 virtualization modes available for IBM mainframes, and you can actually run Windows (and have it supported by Microsoft) along with Linux on a mainframe. People have been trying for decades to duplicate mainframe capabilities in the x86 hardware ecosystem: this whole "cloud" thing is delivering features companies have been asking for in the past 7 years that mainframes have had since 1980 or before. The architecture truly is not that relevant in the greater context; what matters is the software written to target it, and THAT is what companies dump so much money into now. Nobody will argue for ARM in the cloud as your primary compute until there's a sizable body of code that will run quite well AND get you ROI for all your efforts. Because human time is so much more expensive, it's basically not worth rewriting software to retarget anything anymore; it's probably a better use of money to overpay $5M+/yr on power and x86 "inefficiencies" when you've got $1Bn+ of revenue coming in. Believe me, the costs of all the consultants and contractors needed to rewrite software completely outstrip the costs of datacenters, and it's only getting worse as the tech labor market continues to grow.

Now, if almost everyone who has worked on all these advancements in x86-64 compilers and formal model implementation had instead worked on equivalent compilers targeting z/OS, we'd be laughing at Intel's efforts to get efficiency out of x86. But how did most of us actually learn this stuff? On x86 hardware that was affordable to own and hack on in your own time, instead of time-shares (most of the kids that grew up on C-64 programming are into their 40s by now, I think, and likely not even programming anymore).

Rastor posted:

Airline systems are a classic example of this.

Did you know: your airline reservation is a six-digit letters-and-numbers code because back in the day that represented the block of disk on the mainframe that contained the data related to the reservation. There was no such thing as a "database" or "locks", you just read/wrote that block directly.
The thing is that most programmers find these kinds of hacks absolutely brilliant (in their context), as evidenced by the story of Mel (Google it, it's legendary): software that was ingeniously engineered on a technical level but, after enough years, utterly useless and disastrous for the business because nobody else could actually use it. Most of these programmers admittedly don't give a gently caress about handing their code to mediocre programmers who can't understand it (I sometimes swing to that opinion myself), but that's pretty much been the intractable reality of engineering management and software development methodologies for a long, long time: not every company can get great programmers even IF they have a good bit of money to toss their way.

phongn
Oct 21, 2006


MrPablo posted:

This is only partially true; government contracts are subject to the Federal Acquisition Regulation (FAR). Among other things, FAR is "supposed" to give contractors a fair chance to compete for government contracts and also give the government a chance to get a better price and eliminate contractors who are unable to actually provide the services they are bidding on.
The FAR is opaque enough that it is very difficult to get contractors who bother to bid on Federal projects. The White House had to rescind whole sections of the FAR to get enough good people (in the end, less than a dozen out of Silicon Valley) to rescue healthcare.gov.

necrobobsledder posted:

The thing is that most programmers find these kinds of hacks absolutely brilliant (in their context), as evidenced by the story of Mel (Google it, it's legendary): software that was ingeniously engineered on a technical level but, after enough years, utterly useless and disastrous for the business because nobody else could actually use it. Most of these programmers admittedly don't give a gently caress about handing their code to mediocre programmers who can't understand it (I sometimes swing to that opinion myself), but that's pretty much been the intractable reality of engineering management and software development methodologies for a long, long time: not every company can get great programmers even IF they have a good bit of money to toss their way.
Probably the best software group I ever read about was the Space Shuttle's software team, which was incredibly expensive per line of code, but produced amazingly high quality code. Even they occasionally screwed up - in one case, they couldn't guarantee their software would work across years (IIRC it did, but they couldn't prove it until well after the mission had completed). Their practice was documentation, testing and specification at every level (and no epic overtime burnout drives).

Incidentally, they were targeting what were essentially avionics mainframes (IBM's System/4 Pi, derived from the System/360).

phongn fucked around with this message at Oct 21, 2014 around 19:51

WhyteRyce
Dec 30, 2001

Spirited scintillating Sacramento shooters so-called Stauskas and Stojakovic stay super sexy

You will never find a manager or team who will agree to take responsibility for bringing down a working critical backbone of a business, and then suffer all the wrath and finger-pointing for any issue that pops up afterwards.

BobHoward
Feb 13, 2012

Special Operations Executive
Q Section




phongn posted:

Probably the best software group I ever read about was the Space Shuttle's software team, which was incredibly expensive per line of code, but produced amazingly high quality code. Even they occasionally screwed up - in one case, they couldn't guarantee their software would work across years (IIRC it did, but they couldn't prove it until well after the mission had completed). Their practice was documentation, testing and specification at every level (and no epic overtime burnout drives).

As I understand it, the "documentation" wasn't ordinary either -- it was detailed to the level of nearly being pseudocode.

Another interesting detail: The Shuttle flight software was naturally a safety critical system, and everything safety-critical on the Shuttle was engineered using highly detailed fault trees to estimate the probability of loss of mission, loss of vehicle, loss of vehicle and crew, etc. Relying on just one implementation of the software spec was considered too risky -- they had some target defect rate per line of code, and even though it was really low, fault tree analysis suggested the risk of loss of life was too high. So they implemented all the software twice, with independent and semi-firewalled teams, in hopes that if one version had a potentially devastating implementation bug, the other version might not share it.

In flight, both versions were always running simultaneously. The primary version ran on a cluster of three redundant computers, using majority vote to decide on the correct control outputs. The secondary backup software ran on a 2-way redundant set (so, 5 computers in total). Handoff from the 3-way to the 2-way was automatic if the 3-way self-detected severe problems with itself, and could also be forced manually.
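The voting part of the scheme above can be sketched as a toy function. Real flight systems vote on many signals with tolerances, timeouts, and fault counters per channel; this minimal version just picks the value that a majority of redundant channels agree on, and signals "no majority" so a backup could take over.

```python
# Minimal sketch of majority voting on redundant control outputs, in the
# spirit of the Shuttle's 3-way redundant set described above. A real
# implementation compares within tolerances and tracks channel health;
# this toy version assumes exact agreement.

from collections import Counter

def majority_vote(outputs):
    """Return the output a strict majority of channels agree on, else None."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

# One channel produces a bad value; the other two outvote it.
assert majority_vote([1.0, 1.0, 7.3]) == 1.0
# Total disagreement: no majority, so hand off to the backup system.
assert majority_vote([1.0, 2.0, 3.0]) is None
```

The interesting engineering is everything around this function: detecting a persistently outvoted channel, removing it from the set, and deciding when the whole primary set is untrustworthy enough to hand off to the independently written backup.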

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


That's incredible!

Government work is just not good enough for government work these days.

Methylethylaldehyde
Oct 23, 2004
The Benefactor

Agreed posted:

That's incredible!

Government work is just not good enough for government work these days.

It's more like "If this thing has any sort of software fuckup, everyone is going to die horribly, publicly, and immediately. We need to take whatever steps needed to make sure that doesn't happen."

Systems control stuff these days is kinda sorta similar. The multiple versions 3-way voting system is fairly popular for safety critical systems on things that fall out of the sky. Same with having incredibly redundant hardware to handle it.


On milspec stuff, you can have hardware with 100% ECC correction on the memory, on each interface bus, inside the processor, and on each instruction. You could have one bitflip in RAM, another bitflip on the bus due to interference, and an instruction corrupted by a freak magnetic issue, and still get the right answer at the output. All at 95C. Not very fast, mind you, but there isn't a lot of cruft in systems that need stuff like that.
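The principle behind that per-stage ECC can be shown with a classic Hamming(7,4) code: 4 data bits plus 3 parity bits, and any single flipped bit is located by the syndrome and corrected. Real memory/bus ECC is wider (e.g. SECDED over 64-bit words) and done in hardware; this is just the textbook code in Python for illustration.

```python
# Toy Hamming(7,4) single-error correction, illustrating the kind of ECC
# that milspec hardware applies at every stage. Codeword positions 1..7,
# parity bits at positions 1, 2, 4; the syndrome equals the index of the
# flipped bit (0 means no error detected).

def encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]   # d0..d3
    c = [0, 0, 0, d[0], 0, d[1], d[2], d[3]]    # index 0 unused
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]                                # 7-bit codeword

def correct(bits):
    c = [0] + list(bits)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
                | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if syndrome:                  # non-zero syndrome names the bad position
        c[syndrome] ^= 1
    return c[3] | c[5] << 1 | c[6] << 2 | c[7] << 3

word = encode(0b1011)
word[2] ^= 1                      # cosmic ray flips one bit
assert correct(word) == 0b1011    # decoder still recovers the data
```

Chain enough independent layers of this (RAM, bus, register file, instruction fetch) and you get the "three independent corruptions, still the right answer" behavior described above, at the cost of speed and bits.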

Methylethylaldehyde fucked around with this message at Oct 22, 2014 around 02:01

japtor
Oct 28, 2005
WELL ARNT I JUST MR. LA DE FUCKEN DA. oh yea and i suck cocks too


If you want to read up more about it, here's a story from...1996/97:
http://www.fastcompany.com/28121/they-write-right-stuff

WhyteRyce
Dec 30, 2001

Spirited scintillating Sacramento shooters so-called Stauskas and Stojakovic stay super sexy

Reminds me of an old radiation therapy machine I read about in school. Originally it was built with hardware and software safety checks and worked fine. Then they released a new model with the hardware safety checks removed. People were dying horrible deaths before anyone realized the software checks didn't account for how customers were actually using it, and that none of the customers had realized they were tripping the hardware checks on the older models.

Mr Chips
Jun 27, 2007
Whose arse do I have to blow smoke up to get rid of this baby?


WhyteRyce posted:

Reminds me of an old radiation therapy machine I read about in school. Originally it was built with hardware and software safety checks and worked fine. Then they released a new model with the hardware safety checks removed. People were dying horrible deaths before anyone realized the software checks didn't account for how customers were actually using it, and that none of the customers had realized they were tripping the hardware checks on the older models.

Ahh...the good old Therac-25

IIRC AECL excluded the software from the fault tree and only looked at the hardware.

Malderi
Nov 27, 2005
There are three fundamental forces in this universe: matter, energy, and enlighted self-interest.

BobHoward posted:

As I understand it, the "documentation" wasn't ordinary either -- it was detailed to the level of nearly being pseudocode.

Another interesting detail: The Shuttle flight software was naturally a safety critical system, and everything safety-critical on the Shuttle was engineered using highly detailed fault trees to estimate the probability of loss of mission, loss of vehicle, loss of vehicle and crew, etc. Relying on just one implementation of the software spec was considered too risky -- they had some target defect rate per line of code, and even though it was really low, fault tree analysis suggested the risk of loss of life was too high. So they implemented all the software twice, with independent and semi-firewalled teams, in hopes that if one version had a potentially devastating implementation bug, the other version might not share it.

In flight, both versions were always running simultaneously. The primary version ran on a cluster of three redundant computers, using majority vote to decide on the correct control outputs. The secondary backup software ran on a 2-way redundant set (so, 5 computers in total). Handoff from the 3-way to the 2-way was automatic if the 3-way self-detected severe problems with itself, and could also be forced manually.

The redundant set was 4 computers in PASS (Primary Avionics Software System) and the 1 BFS (Backup Flight Software) did not have the capability to run on more than one computer. BFS was also never engaged in flight, but did run some useful displays.

The details of the redundant set synchronization and vote-out procedures were absolutely fascinating. Pretty much a miracle, given that it was all designed in the mid 70's.

Harik
Sep 9, 2001


Combat Pretzel posted:

That's practically impossible, because there's no way to entirely track all pointer references to the DLLs being updated, and on top of that, if active data structures and locations of global static variables mismatch between the active DLL and the one to be switched in (which will be 99.9% the case), all affected apps will crash.

If only there were a CPU feature that allowed different processes to look at the same address and see different contents. Sarcasm aside, it works pretty damned well on Linux. I'm running bleeding-edge on my dev box to catch things before I have to deal with them in production, so I'm eating a libc.so change every upgrade - used by approximately everything at all times.

Linux page cache is tied to an inode (unique file ID) rather than a filename, so when you use the atomic rename operation to replace a file you have both copies in RAM. New execs get the new version, running code keeps the old. The old page cache and now-ghost file are refcounted and when all users exit they're cleaned up. Do all the disk IO, then restart services cleanly.

Active data structures are per-process and continue to use the same code they started with. "Global statics" could mean a few different things: if you mean static data in the library that the processes use, it's still the same from start to finish. Even if it's at a different location in the new library, a running process sees the old one.
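The write-then-rename pattern described above is easy to sketch. The filename here is illustrative, not a real deployment script; the point is that `rename(2)` is atomic on Linux, so any `open()` sees either the complete old file or the complete new one, while processes that already mapped the old inode keep using it until they exit.

```python
# Sketch of atomic file replacement via write + fsync + rename, the same
# mechanism used to swap shared libraries on a live Linux system.
# "libdemo.so" is a made-up example path.

import os
import tempfile

def atomic_replace(path, data: bytes):
    """Atomically replace `path` with `data`."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)   # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())              # data durable before the swap
        os.replace(tmp, path)                 # atomic rename over the old file
    except BaseException:
        os.unlink(tmp)
        raise

atomic_replace("libdemo.so", b"new version")
with open("libdemo.so", "rb") as f:
    assert f.read() == b"new version"
```

Readers of the old inode are unaffected because the rename only changes the directory entry; the old file's pages stay cached and refcounted until its last user goes away, exactly as described above.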

I say 'Linux' in particular rather than Unix, since the ability to handle an upgrade cleanly requires a bit of finesse that's entirely dependent on the vendor.

Maybe someone better versed in the NT kernel can explain why it's impossible to replace DLLs on the fly in Windows; I thought that's what shadow copy was specifically meant to achieve.

phongn
Oct 21, 2006


Harik posted:

Maybe someone better versed on the NT kernel can explain why it's impossible to replace DLLs on the fly in windows, I thought it's what shadow copy was specifically meant to achieve.
You can, but so much of Windows remains in use that you have to restart anyway. It doesn't do much good to replace files on disk if the OS keeps running the old code continuously.

There are updates that don't require restarts, and their number slowly increases with each version.

Factory Factory
Mar 19, 2010

I can do sex. It's just alien sex.


Timeframes seem to be getting a bit weird.

According to TR, according to rumors, Broadwell-E has been pushed back to sample in 4Q 2015 for a 2016 launch. Meanwhile, same article, Skylake-S is sampling already. The Google Translation is really shabby, but apparently the parts out there are 2.3 GHz base/2.9 GHz Turbo at 95W, with an estimated launch sometime in 2015 and notebook parts in 4Q15.

ohgodwhat
Aug 6, 2005

Hebrew Hammer

Agreed posted:

That's incredible!

Government work is just not good enough for government work these days.

Good enough for government work used to be a compliment, until the standard political refrain became to call everything the government does incompetent.

Knifegrab
Jul 30, 2014

G = Gadzooks! I'm terrified of this little child who is going to stab me with the knife!


So I have been running HWiNFO64 in the background for several days now just to see where my i7-4790K is at in terms of temperatures. Most of the time all my temperatures are low; right now my CPU is around 36C. However, at some point my CPU got up to 75C, not sure exactly when or for how long.

Is this too high? I am not overclocking it and it's got the original cooler on it. I am thinking about ordering a better cooler but am not entirely sure what to get. I have a fair amount of room, but I don't know what process is involved with removing old coolers/heatsinks and applying new ones.

cisco privilege
Dec 5, 2005

det er noget at leve for

It's a little warm but not particularly dangerous. Installing a new CPU cooler usually requires removing the motherboard from the case. Some cases provide cutouts in the motherboard tray for replacing heatsinks, but they can still be difficult to install with the motherboard connected. For stock speeds or low overclocking something like a 212+ EVO would work, with higher-end options available from Phanteks, Thermalright, Noctua, and others like Xigmatek at various price-points. Some of these coolers allow re-mounting in the future without needing to remove the motherboard after the initial install, although you're unlikely to need to replace the CPU at any point so it's a less important feature than it was in the socket-775 era. Alternatively there are closed-loop liquid coolers available from various companies.

If you're just planning to keep the CPU at stock then the default cooler is probably fine. If you think you'll overclock it eventually or you just want it running quietly then you'll want to look into a better cooler, knowing that at some point it will involve removing the board.

cisco privilege fucked around with this message at Oct 23, 2014 around 04:17

Knifegrab
Jul 30, 2014

G = Gadzooks! I'm terrified of this little child who is going to stab me with the knife!


cisco privilege posted:

It's a little warm but not particularly dangerous. Installing a new CPU cooler usually requires removing the motherboard from the case. Some cases provide cutouts in the motherboard tray for replacing heatsinks, but they can still be difficult to install with the motherboard connected. For stock speeds or low overclocking something like a 212+ EVO would work, with higher-end options available from Phanteks, Thermalright, Noctua, and others like Xigmatek at various price-points. Some of these coolers allow re-mounting in the future without needing to remove the motherboard after the initial install, although you're unlikely to need to replace the CPU at any point so it's a less important feature than it was in the socket-775 era. Alternatively there are closed-loop liquid coolers available from various companies.

If you're just planning to keep the CPU at stock then the default cooler is probably fine. If you think you'll overclock it eventually or you just want it running quietly then you'll want to look into a better cooler, knowing that at some point it will involve removing the board.

I have a Carbide Air 540, so I believe I have more than enough room to leave the board in, unless I am misunderstanding something here. Also, as long as that temperature is fine, what is the danger zone? 80C? 90C? Would getting a better cooler at least lower my max temps to the sub-70C range?


Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!


Knifegrab posted:

I have a Carbide Air 540, so I believe I have more than enough room to leave the board in, unless I am misunderstanding something here. Also, as long as that temperature is fine, what is the danger zone? 80C? 90C? Would getting a better cooler at least lower my max temps to the sub-70C range?

The CPU will throttle itself at 99 or 100C down to like 800MHz until you stop trying to kill it. I forget the thermal limit, but it might be 105C or something on the core? I wouldn't let it sit over 80C for long periods of time, but 75 max is not a huge deal, and that 80 is more of a personal preference thing (since the temperature monitoring isn't exactly what's on the core, it could be warmer inside; also my cooling might be bad or dusty, which could kick things up a few degrees). It's a little warm, sure, but not terrible.

If you're concerned and want lower temps, the Hyper 212 EVO is like 30 bucks and goes on sale down to 25 regularly; it's just bulky and can be annoying to install if the motherboard is in the case and there's no cutout behind the CPU socket. I have an i5-4670K overclocked and the max temps I get are 68C or so with regular loads. Benchmarking and stability tools like Intel Burn Test and Prime95 can get it hotter, but they're intended to do that (also, don't run them if you're using adaptive voltage).
