lDDQD
Apr 16, 2006

Anime Schoolgirl posted:

What you get for desktop chips these days are actually incredibly low quality high-leakage parts and it's often a miracle they work given how lovely they are (your K chips are actually hilariously inferior to every "desktop replacement" laptop chip). The fact that people pay premiums on them is Intel laughing to the bank.

Binning for low leakage works well for mobile and server - but not for extreme overclocking. Leaky dies will clock much better with more voltage. You actually want the worst die possible to have a crack at a world record.

lDDQD
Apr 16, 2006
Why do people want these [5775C] for desktop, again? The integrated graphics (which you aren't going to use) takes up like half the die. Surely you'd be better off with that die area used to give it... I dunno, like 20 megs more cache or 4 extra cores or something?

lDDQD
Apr 16, 2006
The eDRAM? It doesn't seem to have a whole lot of impact on overall memory performance in a desktop context. I think it's more of a power-saving strategy for mobile than anything else. Plus, it's kinda needed for the large iGPU.

Overall it just doesn't seem like a very good chip for a gaming rig. Same core count as we've had for 6 years, less actual (SRAM) cache, poor overclocking. It's expensive to boot.
I guess people think the eDRAM is going to make some miracles happen :shrug:.

lDDQD
Apr 16, 2006
I'm not seeing any big wins for it, aside from Project CARS in that review. Other reviews paint an even bleaker picture.

lDDQD
Apr 16, 2006
So, because of this stupid thread I'm now a proud owner of a 5775C. Now, can anyone sell me their z97 board please?

lDDQD
Apr 16, 2006
At this point, going out to RAM costs >100 cycles, easily - probably hundreds of cycles, even.
CPUs got faster and memory got higher-bandwidth, but the latency (in nanoseconds) has pretty much stayed the same for years.
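
If anyone wants to see that number on their own machine, here's a minimal pointer-chasing sketch - plain C, assuming Linux/glibc and gcc -O2; the 3 GHz figure in the printout is just a placeholder clock, not a measurement of your CPU. It walks a 64 MiB buffer in one big random cycle, so every load misses cache and has to wait for the one before it:
code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64u * 1024u * 1024u / sizeof(size_t))   /* ~8M entries, 64 MiB: far bigger than any cache */

int main(void) {
    size_t *chain = malloc(N * sizeof(size_t));
    if (!chain) return 1;

    /* Sattolo's algorithm: one big cycle, so every hop is a dependent,
       unpredictable load the prefetcher can't hide. */
    for (size_t i = 0; i < N; i++) chain[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;              /* j in [0, i-1] */
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (size_t hop = 0; hop < N; hop++) idx = chain[idx];   /* each load waits on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    double ns_per_hop = ns / (double)N;
    printf("~%.1f ns per load (final idx %zu), roughly %.0f cycles at a placeholder 3 GHz\n",
           ns_per_hop, idx, ns_per_hop * 3.0);

    free(chain);
    return 0;
}
Typical desktop DDR3/DDR4 setups land somewhere around 70-100 ns per hop, which at multi-GHz clocks is indeed a few hundred cycles.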

lDDQD
Apr 16, 2006
That seems a little draconian, don't you think? I would guess they'd merely neglect to add support, in older versions of Windows, for any new architectural features that show up in future CPU designs. You'd still be able to run Win7 on a Core i7-9700K; it just might not be able to take advantage of all the new features. Also, backwards compatibility is kinda x86's thing. It still supports archaic 16-bit instructions, so if your 1970s-vintage 8088 dies of old age, you can replace it with a Skylake and keep using DOS 1.0.

lDDQD
Apr 16, 2006
Said nobody, ever.

lDDQD
Apr 16, 2006
What voltage?

lDDQD
Apr 16, 2006
Hell, I don't even have that many Chrome tabs - maybe 15-20? And it still manages to take up like 5 gigs of RAM. Gmail tabs, for example, are always leaking; given enough time, they'll use up 1 GB on their own. The SA tab tends to be leaky as hell, too. It seems animated gifs get loaded over and over again and never freed.

lDDQD
Apr 16, 2006

EdEddnEddy posted:

Can someone explain to me what this Turbo Boost Extreme 3.0 tech might be

Essentially, you get frequency control per core, rather than one setting for all the cores (voltage is still shared across all cores). You can then identify your slow and fast cores and clock each one accordingly.
Finally, there's a software tool that lets you pin certain processes to the faster cores (and others to the slower ones) - see the affinity sketch below.
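
Very roughly, the software side boils down to plain old CPU affinity. Here's a minimal sketch of the mechanism (Linux-only, and core 2 is an arbitrary stand-in - the actual Turbo Boost Max 3.0 driver works out which cores are the favoured ones on its own):
code:
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);   /* pretend core 2 is one of the "fast" (best-binned) cores */

    /* pid 0 = the calling process; the scheduler keeps it on core 2 from here on */
    if (sched_setaffinity(0, sizeof(cpu_set_t), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pinned to core 2 - hot code now runs on the core that clocks highest\n");
    return 0;
}
As I understand it, Intel's utility just does this for you automatically, reading the favoured-core list from the driver instead of hardcoding it.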

lDDQD
Apr 16, 2006
Broadwell-C is even more expensive than Skylake, though. And a really poor overclocker (though I suspect that's mostly because it was taped out for low power rather than high performance, being a mobile part and all). As we've seen with other Broadwell designs, the arch is perfectly capable of sky-high clocks (source: that Xeon that turbos to 5.1GHz out of the box).
Still, if your workload's troubles are cache-miss-related, it will give you a huge performance boost.

lDDQD
Apr 16, 2006
Oh, you're right. I wish I'd remembered that I read about it on techpowerup (:jerkbag:) in the first place.

lDDQD
Apr 16, 2006
~1.4v seemed like it was unreasonable for my 1st-gen i7 (45nm Lynnfield). But then I ended up running my i7-875K at 1.44-ish 24/7 in order to achieve 4.2GHz anyway. And it was fine, for years and years... in fact, it can still do that voltage/clock combination just fine today. A buddy of mine tried a similar thing: he had a Xeon version of the i7-920 (I don't remember the exact Xeon model#, but it was literally a server-branded 920), and he ran it at 4.4GHz @ ~1.42v. Initially it was fine, but then it started to degrade fast. After a couple of months, it was no longer stable at 4.4. So he dropped it down to 4.3, but after a while it got wonky at that clock also. Eventually it couldn't even do 4.2, so he gave up and set it to a more conservative 4.0GHz @ 1.2?v. Meanwhile, mine kept on trucking at 4.2GHz @ 1.44v.

The difference? My buddy had a heatpipe tower cooler, while I had a custom loop, which was keeping my 875K as cool as a cucumber: it didn't really go above 55°C on a typical load. His was hitting low 80s.

Now that I replaced my Lynnfield with a Broadwell-C, I actually still find myself running my i7-5775C at about 1.4v. It actually probably doesn't even need that voltage to do 4.2GHz (which is about the most it will ever do; you could give it 1.5v if you wanted to, and it wouldn't really make a difference; these things are terrible overclockers). Anyway, it's been fine so far, although I've only had it for about 4 months.

lDDQD
Apr 16, 2006

PBCrunch posted:

Some games apparently love the higher memory bandwidth afforded by DDR4, which you can't use with a 4700K. Intel 6xx0 gets you dual-channel DDR4, and LGA 2011-3 gets you up to quad-channel DDR4 omg bandwidth.

I don't think the games in question really care about the difference between dual and quad channel DDR4.

If you feel like paying even more money than a 6700K is worth, the 5775C also improves memory performance, but in a totally different way: it just has a giant L4 cache. This allows you to re-use your DDR3 and Z97 board, I guess?

lDDQD
Apr 16, 2006
Timmy.

lDDQD
Apr 16, 2006

Boiled Water posted:

DDR4 seems to me to be the real draw of Skylake. Otherwise there's not much difference.

Not sure why people like it so much; it's pretty much the same thing as DDR3, just at a lower voltage. Latency hasn't gone down a lick, and dual-channel DDR3 has had more than enough bandwidth to keep quad-core CPUs busy - albeit after quite a long delay, which hasn't really gotten any better.
If they made a switch away from capacitor-based DRAM, that would be something to really get excited about. Upgrading from DDRn to DDRn+1 is just :geno:.

lDDQD
Apr 16, 2006
I suspect it's because it's a lot cheaper to support them this way. Something goes wrong? Just restart the VM with a clean image. So no, they probably wouldn't be super thrilled if people started bringing in their Core 2 Quads.

lDDQD
Apr 16, 2006

silence_kit posted:

I suspect that his claim that Intel can include the same functionality in half the die area and thus they are able to halve the cost per function every process node is a little misleading, though. I am not a VLSI designer, so I may be all wet here, but still, I thought that there were a lot of functions on a computer chip which do not benefit much from scaling, like longer-distance communication across the chip & storage (SRAM). And there's the issue that because of the heat dissipation constraint, you are obligated to design your circuits with a lower activity factor if you want a higher density at the same speed. I suspect he may not be accounting for those factors when he presents the cost/function plot, and is presenting the cost/function plot as if the entire computer chip were a single adder circuit, without having many of the real constraints of a real computer chip.

A smaller transistor is basically all-around better. It has lower parasitic capacitance, and thus can switch on and off faster. You also get a lower threshold voltage, which lets you drop the supply voltage and burn less power per switch. You do run into some problems once these things get small enough, though. For example, nobody worried about sub-threshold leakage until we hit around 100-ish nanometers. Then it started becoming a problem - and it got worse as the devices got smaller. The leakage current was really bad mostly due to the sub-optimal geometry of the classical planar MOSFET: easy to manufacture, but not a very good shape. For a while, though, it didn't matter. So they came up with non-planar designs: the finFET attempts to solve these geometry problems by having the channel stick out of the chip vertically, greatly reducing the area that is in contact with the substrate. More improvements are to come on this front, in the shape of the gate-all-around FET: a natural evolution of the finFET. The optimum shape for a MOSFET would probably be a sphere, by the way, but it would be a nightmare to manufacture.
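
For reference, the textbook relations behind that trade-off (generic CMOS formulas, nothing specific to any particular Intel process): dynamic power falls with capacitance and supply voltage, but sub-threshold leakage climbs exponentially as the threshold voltage comes down, which is why leakage became the limiter once geometries shrank.

\[ P_{\text{dyn}} = \alpha\, C\, V_{DD}^{2}\, f \qquad\qquad I_{\text{leak}} \propto e^{-V_{th}/(n V_T)}, \quad V_T = kT/q \approx 26\ \text{mV at room temperature} \]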

There are all sorts of problems cropping up with the wires on the chips now, too - so you're right about that. They're actually experimenting with using silver wires just to get higher conductivity, even though dealing with silver is a colossal pain in the rear end; you need diffusion barriers so it doesn't start migrating into the silicon. There are also problems with high-density memory, but overall you can definitely fit more SRAM into the same amount of die area as the transistors become smaller. These problems weren't really a huge concern until very recently, though. DRAM has it way worse, too... at some point it may well turn out that you can keep making SRAM smaller and faster, but DRAM hits a brick wall and can't feasibly be made any faster or smaller. Hopefully by then we'll have ditched it entirely; it was always a source of endless annoyance anyway. It's not even getting significantly quicker; the latency has maybe been cut in half since DDR1.

lDDQD
Apr 16, 2006

GRINDCORE MEGGIDO posted:

I posted in the GPU thread, but how much hbm are they fitting? Wonder if it's accessible as CPU cache.

It would be quite useless as a CPU cache - it wouldn't even make very good CPU memory. A CPU is all about low latency: it wants to grab very small amounts of data from memory, and it wants them quickly. A GPU is totally the other way around: it wants to grab very large amounts of data from memory, and it doesn't particularly care that the fetch will take a long time. Since HBM was designed with GPUs in mind, it does the latter fairly well. Which, unfortunately, makes it complete garbage as far as a CPU is concerned.

lDDQD
Apr 16, 2006

Don Lapre posted:

There are 3d printed ones on eBay for $12 that just require a vice.

Edit the one I used

https://m.ebay.com/itm/FREE-S-H-CPU...wYAAOSwKoRZYBb3

Are there CAD files so you can 3D print one yourself?

lDDQD
Apr 16, 2006

fishmech posted:

What's the reason people look for any USB 2.0 ports at all these days? It really doesn't seem like you should be building a new computer for an OS that can't handle 3.0, and most of the stuff that doesn't like a 3.0 port works fine through an older 1.1 or 2.0 external hub.

People still use 1.1/2.0 peripherals that don't work well (or at all) with 3.0.

lDDQD
Apr 16, 2006
Mostly for historical reasons - these things used to be very expensive, so both the CPU and every last bit of the main memory got used for everything. Nowadays, though, both CPU real estate and memory are (relatively) cheap, so what you're suggesting actually makes sense, although I think so far this [that is, reserving one CPU core to run only the operating system and nothing else] has only been attempted in consoles. They're even doing the big.LITTLE CPU core approach in cell phone CPUs.

lDDQD
Apr 16, 2006
They're mostly just memory-hard.

lDDQD
Apr 16, 2006
Or design a better shim so CPUs don't even need a heatspreader.

lDDQD
Apr 16, 2006
Every time someone tries VLIW architectures for anything other than DSPs, it ends up being too much of a pain in the rear end to be worth it.

lDDQD
Apr 16, 2006

movax posted:

Aren’t the Tensilica Xtensas VLIW? Or is it more that they are super customizable, letting you add on instructions, and people often add VLIW-type stuff?

Some days I wish I worked on projects with big enough budgets where playing with custom SIP is a requirement. Partially out of morbid curiosity as to what the toolchain and collateral are like for those cores and parts.

Until just now, I wasn't even aware Cadence had designed their own processor. After a cursory glance, it seems it's a RISC CPU with (optionally) a DSP glued to it - and the DSP is obviously VLIW.
What I was alluding to were the famously terrible Itanium arch, as well as older GPU architectures - they all attempted VLIW (with varying degrees of success), but eventually gave up.

lDDQD
Apr 16, 2006
In HPC, you tend to do that a lot anyway.

lDDQD
Apr 16, 2006

BangersInMyKnickers posted:

Please don't gently caress your consumer electronics

What are you, my warranty?

lDDQD
Apr 16, 2006
How are Blizzard so poo poo at programming? It's not like they're a small indie studio that can't attract top talent.
