japtor
Oct 28, 2005

Anime Schoolgirl posted:

We also may have low profile cards worth a drat
Yeah that's another reason I'm waiting. The Fury Nano would be tempting, but I'm cheap and patient.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

movax posted:

Wonder if there will be some irritating, minor ACPI / EFI related bugs/glitches when I toss Windows 10 onto that machine.
EFI seems to work just fine on my P8P67 EVO. Suspend gets stuck every 4th or 5th time, so you need to long-press the power button to shut the machine off. IIRC, when Hyper-V is installed you're hosed, because it does suspend-to-RAM only; if it's not installed, it dumps memory to disk before suspending.

Botnit
Jun 12, 2015

So I've never actually been around for / paying attention to a new CPU/socket release; how does it normally go down? Like, say the Gamescom thing in Germany is on the 8th: do they announce they're now on sale and suddenly Amazon/Newegg are selling them immediately, or do they slowly start to trickle out from that date? And it's only i7 processors at first, right? Is it the normal i7s or only the Extremes?

Botnit fucked around with this message at 16:17 on Jul 25, 2015

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

Botnit posted:

So I've never actually been around for / paying attention to a new CPU/socket release; how does it normally go down? Like, say the Gamescom thing in Germany is on the 8th: do they announce they're now on sale and suddenly Amazon/Newegg are selling them immediately, or do they slowly start to trickle out from that date? And it's only i7 processors at first, right? Is it the normal i7s or only the Extremes?

For Skylake, only the highest-end i5 and i7 are supposed to be released first, some time in August. We know that they are being announced on the 8th, but general availability could be any time in the next 30 days. I am assuming that the Z170 motherboards will be available shortly before.

lllllllllllllllllll
Feb 28, 2010

Now the scene's lighting is perfect!
I know I brought this up earlier, but what's the deal with the "T" processors again? The idea of an i7 CPU with a TDP of 35 watts intrigues me, but I wonder about the actual advantages and disadvantages of, say, the i7-4765T. Somebody said it's basically a Celeron with more cores since it doesn't clock higher on its own; would that be correct?

\/ Thanks to both of you. That cures me of wanting one of those things.

lllllllllllllllllll fucked around with this message at 20:58 on Jul 27, 2015

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars


No, and whoever told you that shouldn't be giving computer advice.

Celerons traditionally had newer instruction-set extensions and other performance features disabled (TW: Wikipedia). Actually, some Celerons are pretty much Atoms at this point.

Pentiums are less cut down than Celerons.

The letter (if any) after a given part number defines what its priorities are. T processors, for example, are designed to use very little power for their performance level, but this is mostly useful when getting rid of heat is actually an issue. Think very high altitude, not servers (you should probably be using Xeons in servers and custom A/C jobs in server rooms). Intel ARK will let you compare the different letter versions of a given processor number - this is quite handy.
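
For a concrete feel, here's a rough comparison of the Haswell i7 letter variants from memory (treat these numbers as approximate and verify them on ARK):

```python
# Haswell i7 letter variants, numbers from memory -- verify against Intel ARK.
# name: (base GHz, max turbo GHz, TDP W)
variants = {
    "i7-4770":  (3.4, 3.9, 84),  # baseline part
    "i7-4770S": (3.1, 3.9, 65),  # S: lower base clock, same turbo
    "i7-4770T": (2.5, 3.7, 45),  # T: low power
    "i7-4765T": (2.0, 3.0, 35),  # T: ultra-low power
}

for name, (base, turbo, tdp) in variants.items():
    print(f"{name}: {base:.1f}-{turbo:.1f} GHz, {tdp} W")
```

The gap between the 4765T's 3.0 GHz turbo and the 4770's 3.9 GHz is the trade-off the "Celeron with more cores" quip was gesturing at, even if it oversells it.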

On the other hand, for Intel processors power draw and heat output are mostly a concern under heavy use - even if the machine is constantly doing low-level stuff (Internet), it won't be drawing much more than it would at idle (read: probably nowhere near TDP), and when actual serious work or play is to be done, you'd probably rather have the performance. Stick with baseline or unlocked processors.

dont be mean to me fucked around with this message at 13:55 on Jul 26, 2015

Anime Schoolgirl
Nov 28, 2002

T chips are great for the performance they give you, and you can use them in silly mini-ITX custom shells or slim and low-profile cases. With an average cooler and without using the GPU, they run at 4-core turbo basically all of the time, making the difference between them and a normal full-voltage CPU minimal.

The driving reason for this is that the wiring in my house blows and my room's wall outlet throws a shitfit at higher than a 550W power draw. But they're OEM chips, so you're pretty much stuck with 1-month+ OEM delays or comical 50% markups. Amazon and Newegg very rarely stock chips like these.

If you're like most builders, you tend to build stuff wider than 5 inches and have actual working wiring, and thus don't really need the power savings (you save like 75 cents a month between a K processor at stock clocks and a T processor in most use cases, so it's not even really a consideration). If you just want flexibility for coolers, I suggest the S series instead, as they run significantly cooler and lose just 0.1 or 0.2 GHz, with the obvious added advantage of being widely available.
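
To sanity-check that 75-cents figure, a quick back-of-the-envelope; the wattage delta, hours of load, and electricity rate below are all assumptions, not measurements:

```python
# Rough monthly cost delta between a K chip at stock clocks and a T chip.
# All inputs are illustrative assumptions.
watt_delta = 35         # W average difference under load (assumed)
load_hours_per_day = 6  # hours/day actually at load (assumed)
usd_per_kwh = 0.12      # typical US electricity rate (assumed)

kwh_per_month = watt_delta * load_hours_per_day * 30 / 1000
savings = kwh_per_month * usd_per_kwh
print(f"~{kwh_per_month:.1f} kWh/month, ~${savings:.2f}/month saved")
# -> ~6.3 kWh/month, ~$0.76/month: right around the 75 cents quoted.
```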

Also, for T chips, the *65T and *85T parts lose too much performance for going just 10 watts lower. They don't even clock anywhere near the standard consumer models with 4-core turbo.

Anime Schoolgirl fucked around with this message at 14:51 on Jul 26, 2015

NihilismNow
Aug 31, 2003
And I thought just a 10A 230-volt circuit was bad. WTF kind of wiring can't deal with a 600-watt draw? How do you even use a vacuum cleaner, washing machine, or refrigerator?
Thank you for the info, but I'd have someone take a look at that wiring before buying a special low-power CPU.
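
For scale, nominal circuit capacity is just P = V × I, which is why a sub-600W limit sounds so broken (standard nominal ratings; the 550W figure is from the post above):

```python
# Nominal capacity of common household circuits (P = V * I) vs. the
# ~550 W trip point described above.
circuits_w = {
    "EU 230 V x 10 A circuit": 230 * 10,  # 2300 W
    "US 120 V x 15 A circuit": 120 * 15,  # 1800 W
    "the problem outlet":      550,       # breaker trips above this
}
for name, watts in circuits_w.items():
    print(f"{name}: {watts} W")
# A typical corded vacuum cleaner alone draws on the order of 1000+ W,
# hence the disbelief.
```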

Nintendo Kid
Aug 4, 2011

by Smythe

NihilismNow posted:

And I thought just a 10A 230-volt circuit was bad. WTF kind of wiring can't deal with a 600-watt draw? How do you even use a vacuum cleaner, washing machine, or refrigerator?
Thank you for the info, but I'd have someone take a look at that wiring before buying a special low-power CPU.

Probably an older house or apartment, and kitchen/heavy-appliance circuits tend to get upgraded first. Also, a lot of people use battery vacuums that'll run for an hour or so, and then you just leave them to charge overnight.

HERAK
Dec 1, 2004

NihilismNow posted:

And I thought just a 10A 230-volt circuit was bad. WTF kind of wiring can't deal with a 600-watt draw? How do you even use a vacuum cleaner, washing machine, or refrigerator?
Thank you for the info, but I'd have someone take a look at that wiring before buying a special low-power CPU.

I still find it hilarious that most of America hasn't succumbed to some sort of massive electrical fire and that you are willing to accept such comparatively low standards for your household wiring.

Back on topic. Has Intel made any mention of using HBM on future CPUs?

Anime Schoolgirl
Nov 28, 2002

NihilismNow posted:

Thank you for the info, but I'd have someone take a look at that wiring before buying a special low-power CPU.
We have. It's kind of a hard problem to solve because the switch box doesn't really exist beyond the breaker (this is a problem with most houses in this neighborhood :psyboom:). The patch we have is a really sensitive breaker that cuts load to the problem wiring (which is just my room, hilariously); otherwise we'd have to dump five figures just to unfuck the wiring to the house and replace it with an electrical setup from at least last decade, something we won't have the means and resources for until at least 3 years from now.

HERAK posted:

Back on topic. Has Intel made any mention of using HBM on future CPUs?
Nope, but they're putting small amounts of DRAM on-package just for the GPU (Skylake-generation Iris graphics will use 256MB).

Potato Salad
Oct 23, 2014

nobody cares


Thanks for the link, Unimaginative -- that's really helpful.

HERAK posted:

I still find it hilarious that most of America hasn't succumbed to some sort of massive electrical fire and that you are willing to accept such comparatively low standards for your household wiring.

Same with our ISPs :( If you haven't been following what the majordomo of Comcast has been doing in his role as chairman of the FCC (the internet/broadcast regulator), he's basically been railing on mobile data providers (a market Comcast has no stake in) about data caps & throttling, distracting us from the depressingly rapid spread of data caps and throttling among landline providers.

Guy needs to go, quickly.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
Catching up, resurrecting LGA chat because someone was wrong on the something awful dot com forums

GokieKS posted:

LGA 775 started with Prescott and Smithfield, which had quite possibly the worst power efficiency of any Intel CPU in history. I think people are kidding themselves if they think the primary reason was anything but to shift the issue of pin damage to MBs.

No. After the capital costs of switching are amortized, LGA sockets probably cost less than ZIF PGA sockets and I would be surprised if either party was all that worried about warranty repair costs for damaged pins.

Methylethylaldehyde posted:

Much lower inductance, needed to drive a good signal at lower voltages.

Yes. Better power delivery, better for high speed signals. Also, higher pin density (more contacts per unit area).

If Wikipedia is to be believed, LGA sockets were in use in the 1990s. I would guess that Intel simply didn't start using them until pushed to by technical requirements.

PC LOAD LETTER posted:

That was the reason given but AMD has been able to do pretty well with 'regular' sockets for quite a long time now so I'm not sure how necessary it really was to do it.

AMD actually switched to LGA for server parts many, many years ago. Same reasons as Intel.

Not switching to LGA on the desktop is likely a function of "not enough budget for those product lines". The horrible irony is that they probably need to; they'd end up being more competitive and reducing their per-unit costs, but AMD hasn't been able to spend much money on desktop platform infrastructure for a long time.

(note: they switched to LGA on server parts back when they were actually still making some money on server CPUs; those days are long gone)

Panty Saluter
Jan 17, 2004

Making learning fun!

HERAK posted:

I still find it hilarious that most of America hasn't succumbed to some sort of massive electrical fire and that you are willing to accept such comparatively low standards for your household wiring.

Back on topic. Has Intel made any mention of using HBM on future CPUs?

The regulations are fine; you should see what gets grandfathered in or just plain ignored. :supaburn:

Nintendo Kid
Aug 4, 2011

by Smythe
There's still some housing out there that contains substantial amounts of this wiring:
https://en.wikipedia.org/wiki/Knob-and-tube_wiring

Rastor
Jun 2, 2001

Anime Schoolgirl posted:

Nope, but they're putting small amounts of DRAM on-package just for the GPU (Skylake-generation Iris graphics will use 256MB).
The rumor I heard was that Skylake will get 128MB and it's Kaby Lake that will get 2x128MB modules. And even then Intel seems reluctant to make those chips available in socketable form.

japtor
Oct 28, 2005

Anime Schoolgirl posted:

Nope, but they're putting small amounts of DRAM on-package just for the GPU (Skylake-generation Iris graphics will use 256MB).
It also acts as a big rear end L4 cache, IIRC. Not sure how much it matters in practice, but I vaguely recall some of the older Iris Pro mobile parts getting big boosts on random benchmarks because of it.

And they don't have HBM plans, but they have their own stacked-DRAM (and other?) thing in HMC, the Hybrid Memory Cube, although it might have some other name now (or was another name before). I think the first implementation is going to be on one of the upcoming Xeon Phi chips.

Nintendo Kid posted:

There's still some housing out there that contains substantial amounts of this wiring:
https://en.wikipedia.org/wiki/Knob-and-tube_wiring
:stare:

Rastor posted:

The rumor I heard was that Skylake will get 128MB and it's Kaby Lake that will get 2x128MB modules. And even then Intel seems reluctant to make those chips available in socketable form.
I thought it seemed like they were going to push them more actually :confused:

Nintendo Kid
Aug 4, 2011

by Smythe

To be fair, it's been illegal to build new buildings with it since, I think, the Second World War, unless you're given special permission for historical purposes, and typically any major maintenance on the electrical system of a house that has it requires you to get rid of it. You could probably find a few places around Europe that have it installed, but by the time most places there got electricity, more modern wiring styles were already cheaper.

Panty Saluter
Jan 17, 2004

Making learning fun!

Nintendo Kid posted:

There's still some housing out there that contains substantial amounts of this wiring:
https://en.wikipedia.org/wiki/Knob-and-tube_wiring

"We don't need no ground path, let the mothafucka burn"

Potato Salad
Oct 23, 2014

nobody cares


Have we heard anything about low-power versions of Skylake?

Nintendo Kid posted:

There's still some housing out there that contains substantial amounts of this wiring:
https://en.wikipedia.org/wiki/Knob-and-tube_wiring


That frayed cable.


That splice!

I think I grew up in a house with this poo poo, if my memory isn't making up what the crawlspace below the ground floor looked like. My mother insisted on doing our own electrical work :hb:

NeuralSpark
Apr 16, 2004

Nintendo Kid posted:

There's still some housing out there that contains substantial amounts of this wiring:
https://en.wikipedia.org/wiki/Knob-and-tube_wiring

I live in a house in SF that was built in 1899, and we have this for some of our electrical. The first time I encountered some I noped right the gently caress out and called an electrician. The cloth insulation was so old it crumbled at the slightest touch. Thankfully it's only left in the spaces where it can't be reached, such as in the ceiling. The rest of the house was retrofitted (terribly) with standard 3-conductor in conduit.

NeuralSpark fucked around with this message at 02:46 on Jul 27, 2015

Winifred Madgers
Feb 12, 2002

A good part of my house is still that way. Those pics on Wikipedia could have been taken in my own basement. Old farmhouse built in 1929. The entire second floor has five outlets in six rooms - including the bathroom, which has been retrofitted with two GFI circuits. The three bedrooms have one (ungrounded) outlet each; it's very inconvenient. The two rooms in the front of the house still have nothing at all, except ceiling lights.

What's stopping me from getting going on that is we also have to save up to reverse the air ducts (and add returns upstairs) because it's still set up for a gravity-fed wood burner even though it has a new propane furnace.

We need to do that sooner or later, and we want to take out a wall to open up the place, but that wall has the thermostat on it, and there's nowhere else to put it except an interior wall, so we need proper airflow for that to be reasonable. So we pretty much have to do it all at once to do it right, which is the problem in a lot of these cases.

SpelledBackwards
Jan 7, 2001

I found this image on the Internet, perhaps you've heard of it? It's been around for a while I hear.

A structural fire due to a wiring fault will also open up the walls quite nicely.

pienipple
Mar 20, 2009

That's wrong!
My dad, who is a competent electrician, replaced what was left of the old knob-and-tube wiring in our house during a major renovation. Got rid of the crumbling uninsulated lath-and-plaster walls in all but one room, too.

It was a farmhouse from 18 something with a conglomeration of amateur extensions and a red brick foundation.

AllanGordon
Jan 26, 2010

by Shine
Thank you for telling me.

PC LOAD LETTER
May 23, 2005
WTF?!

BobHoward posted:

AMD actually switched to LGA for server parts many, many years ago.
Yea, but we weren't really talking about server CPUs per se, and while it could be due to budget issues, there's nothing I know of that anyone can point to to say definitively one way or another why they're still sticking with pinned sockets even now for AM4. The publicly stated reasons for going LGA on desktop don't really seem to have panned out. It's not like AMD has any issues running OC'd RAM and high-clocked buses over a pinned socket vs. an LGA, and their heat issues are all due to core design/process, not the bus voltage. Where are those big gains from going LGA hiding?

Durinia
Sep 26, 2014

The Mad Computer Scientist

PC LOAD LETTER posted:

Yea, but we weren't really talking about server CPUs per se, and while it could be due to budget issues, there's nothing I know of that anyone can point to to say definitively one way or another why they're still sticking with pinned sockets even now for AM4. The publicly stated reasons for going LGA on desktop don't really seem to have panned out. It's not like AMD has any issues running OC'd RAM and high-clocked buses over a pinned socket vs. an LGA, and their heat issues are all due to core design/process, not the bus voltage. Where are those big gains from going LGA hiding?

The "gains" from LGA are more that they can use the same package for both socketed and soldered-down parts - a cost savings from a development/supply standpoint - whereas PGA requires the socket.

Why AMD hasn't switched? :iiam: Probably backwards compatibility. Maybe when they move to DDR4 sockets?

(Assuming they're still in business...)

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Durinia posted:

The "gains" from LGA are more that they can use the same package for both socketed and soldered-down parts - a cost savings from a development/supply standpoint - whereas PGA requires the socket.

Why AMD hasn't switched? :iiam: Probably backwards compatibility. Maybe when they move to DDR4 sockets?

(Assuming they're still in business...)

How many soldered parts does AMD actually sell? They're so far behind on power consumption that buying AMD mobile anything is an even bigger mistake than buying their desktop or server parts.

Anime Schoolgirl
Nov 28, 2002

Intel still uses reverse PGA for standard-sized laptop sockets, mostly because laptops don't need large amounts of I/O and end users actually want post-purchase upgrade options.

The real benefit to going LGA is getting a higher pin count, and thus more I/O, out of the same socket area, plus using the same CPU package for soldered and socketed parts as stated before, since you can only make PGA pins so small before they bend on landing. The voltage/inductance part is largely bunk as process nodes improve. AMD could go the comedy route and make PCBs as big as LGA2011 for 1100-1200 pins, though.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

PC LOAD LETTER posted:

Yea, but we weren't really talking about server CPUs per se, and while it could be due to budget issues, there's nothing I know of that anyone can point to to say definitively one way or another why they're still sticking with pinned sockets even now for AM4. The publicly stated reasons for going LGA on desktop don't really seem to have panned out. It's not like AMD has any issues running OC'd RAM and high-clocked buses over a pinned socket vs. an LGA, and their heat issues are all due to core design/process, not the bus voltage. Where are those big gains from going LGA hiding?

Nobody said the gains were ginormous. And while one can't say for certain what's going on without inside information, it's been rather obvious for many years that AMD can't afford to do everything it needs to do. What goes on the chopping block first? Minor features. Ones where it isn't immediate death-of-the-company type stuff if you don't have them. LGA is in that category.

For example, consider power delivery. It's not about whether you can deliver 150W at all. There's plenty of cross-sectional area in those PGA pins; they can carry a lot of amps without melting. The real problem is that higher-inductance pins have worse transient response - meaning that when your high-performance CPU core has current demand spikes (they always do), it induces an unpleasantly large momentary voltage droop at the core. In other words, the package causes problems with voltage regulation at the point of load, not reduced power capacity.

This is a solvable problem: specify a slightly higher core voltage, far enough above the true minimum that the transients never endanger data integrity. But now your CPU is using more power, and you have to bring it back down under the TDP target somehow, and welp maybe it's time to cut frequency a bit...

There's dozens (maybe even hundreds) of minor things like this where, if taken alone, it's not a huge advantage for Intel, but the fact that Intel is able to do them all adds up to a substantial advantage.
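
A minimal sketch of the droop arithmetic, with made-up but plausible-scale numbers (none of these come from a real datasheet):

```python
# Transient voltage droop: V_droop ~= L_eff * dI/dt.
# Every value here is an illustrative assumption.
l_eff = 100e-12     # effective power-delivery loop inductance, ~100 pH (assumed)
current_step = 50   # A, load step when the core ramps up (assumed)
step_time = 100e-9  # s, how fast demand spikes (assumed)

di_dt = current_step / step_time  # 5e8 A/s
v_droop = l_eff * di_dt           # 0.05 V
print(f"droop ~= {v_droop * 1000:.0f} mV")
# ~50 mV of droop on a ~1.2 V core is why you'd pad the core voltage,
# and padding voltage raises power, which eats into the TDP budget.
```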

BobHoward fucked around with this message at 07:57 on Jul 28, 2015

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

BobHoward posted:

Nobody said the gains were ginormous. And while one can't say for certain what's going on without inside information, it's been rather obvious for many years that AMD can't afford to do everything it needs to do. What goes on the chopping block first? Minor features. Ones where it isn't immediate death-of-the-company type stuff if you don't have them. LGA is in that category.

For example, consider power delivery. It's not about whether you can deliver 150W at all. There's plenty of cross-sectional area in those PGA pins; they can carry a lot of amps without melting. The real problem is that higher-inductance pins have worse transient response - meaning that when your high-performance CPU core has current demand spikes (they always do), it induces an unpleasantly large momentary voltage droop at the core. In other words, the package causes problems with voltage regulation at the point of load, not reduced power capacity.

This is a solvable problem: specify a slightly higher core voltage, far enough above the true minimum that the transients never endanger data integrity. But now your CPU is using more power, and you have to bring it back down under the TDP target somehow, and welp maybe it's time to cut frequency a bit...

There's dozens (maybe even hundreds) of minor things like this where, if taken alone, it's not a huge advantage for Intel, but the fact that Intel is able to do them all adds up to a substantial advantage.

Death by a thousand feature cuts?

Ak Gara
Jul 29, 2005

That's just the way he rolls.
There doesn't seem to be a water cooling thread, so I'll ask here. My 5 GHz 2500K is quite loud under an H100, so I was looking into putting together a custom loop (+ SLI 680s).

I've read that sometimes adding a second radiator to your loop only drops the temps by a few degrees, due to already being at the thermal capacity limit of the water block itself. Is that correct?

VelociBacon
Dec 8, 2009

Ak Gara posted:

There doesn't seem to be a water cooling thread, so I'll ask here. My 5 GHz 2500K is quite loud under an H100, so I was looking into putting together a custom loop (+ SLI 680s).

I've read that sometimes adding a second radiator to your loop only drops the temps by a few degrees, due to already being at the thermal capacity limit of the water block itself. Is that correct?

In some situations, yes, but with two video cards and a CPU in the loop you really want two radiators, AFAIK.
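
Roughing out the heat budget shows why (TDP-class figures; the overclocked CPU number and the per-fan rule of thumb are assumptions):

```python
# Rough loop heat budget vs. radiator sizing, rule-of-thumb numbers only.
heat_sources_w = {
    "2500K @ 5 GHz": 150,  # assumed; well above the 95 W stock TDP
    "GTX 680 #1": 195,     # roughly stock board power
    "GTX 680 #2": 195,
}
total_w = sum(heat_sources_w.values())  # ~540 W

watts_per_120mm = 100  # comfortable dissipation per 120 mm at quiet fan speeds (assumed)
fans_needed = -(-total_w // watts_per_120mm)  # ceiling division
print(f"~{total_w} W total -> roughly {fans_needed}x 120 mm of radiator")
# ~540 W -> ~6x 120 mm, i.e. two large radiators rather than one.
```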

EoRaptor
Sep 13, 2003

by Fluffdaddy

Ak Gara posted:

There doesn't seem to be a water cooling thread, so I'll ask here. My 5 GHz 2500K is quite loud under an H100, so I was looking into putting together a custom loop (+ SLI 680s).

I've read that sometimes adding a second radiator to your loop only drops the temps by a few degrees, due to already being at the thermal capacity limit of the water block itself. Is that correct?

Yes. Unless the radiator is actually warm/hot to the touch and its fans are running all the time, it's not the bottleneck.

EIDE Van Hagar
Dec 8, 2000

Beep Boop

HERAK posted:

I still find it hilarious that most of America hasn't succumbed to some sort of massive electrical fire and that you are willing to accept such comparatively low standards for your household wiring.

Back on topic. Has Intel made any mention of using HBM on future CPUs?

No, but they announced this today, which can basically eliminate DRAM in some applications:

http://www.wired.com/2015/07/3d-xpoint/

SpelledBackwards
Jan 7, 2001

I found this image on the Internet, perhaps you've heard of it? It's been around for a while I hear.

Edit: well gently caress, dunno how I didn't see this was already posted yesterday in the thread.

What do you guys make of this, and do you think it has the potential to replace both RAM and solid state storage at the same time?

Intel, Micron debut 3D XPoint storage technology that's 1,000 times faster than current SSDs

quote:

3D XPoint technology is now in production and will be sampled later this year with select customers. Intel and Micron are developing individual products based on the technology that are forecast to be available sometime next year.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Ak Gara posted:

There doesn't seem to be a water cooling thread, so I'll ask here. My 5 GHz 2500K is quite loud under an H100, so I was looking into putting together a custom loop (+ SLI 680s).

I've read that sometimes adding a second radiator to your loop only drops the temps by a few degrees, due to already being at the thermal capacity limit of the water block itself. Is that correct?

What happens if you lower the fan speed? Does it overheat?

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

SpelledBackwards posted:

Edit: well gently caress, dunno how I didn't see this was already posted yesterday in the thread.

What do you guys make of this, and do you think it has the potential to replace both RAM and solid state storage at the same time?

Intel, Micron debut 3D XPoint storage technology that's 1,000 times faster than current SSDs

I didn't really read too much into it, but suddenly I'm whisked back a few years, to the announcement that memristors would be replacing our storage and RAM by... 2013.

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars


HalloKitty posted:

I didn't really read too much into it, but suddenly I'm whisked back a few years, to the announcement that memristors would be replacing our storage and RAM by... 2013.

:agreed:

And even if it DOES pan out, it'll be a few years before it hits stuff you care about, and who knows if the general public will even be able to get general-purpose computers without extortionate price tags and onerous regulations, the way the mobile revolution or whatever is going.

Just get the SSD now.


Gwaihir
Dec 8, 2009
Hair Elf

HalloKitty posted:

I didn't really read too much into it, but suddenly I'm whisked back a few years, to the announcement that memristors would be replacing our storage and RAM by... 2013.

On the one hand, yea, but on the other hand, this is Intel announcing actual commercial-scale production at a fab capable of cranking out 20k wafers per month. That's a lot more real than the previous ReRAM announcements, where it's something like "hey guys, we created a one-off prototype cell that worked!"
