Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Aren't games FP heavy? And what about preprocessing in the graphics drivers?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

DaNzA posted:

And finally the Abit VP6 that was great with dual P3 933 but had a bunch of bad caps :sigh: ...
Dual CPU crew represent!

loving multicore hipsters. We were at it way back. Actually, I even had a dual P2-350. At some point, Intel dropped consumer multiprocessing and AMD didn't have any offerings. I refused to upgrade away from my own P3-933 until AMD released the Athlon X2.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Oh, I forgot about the Athlon MP. I can't remember why I didn't go that route.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
That doesn't look anything like other AMD presentations.

Anyway, from how I understand it, a BD module is essentially two integer cores and one FP unit behind a common decoder frontend. So with regard to integer performance, the BD should blow the SB out of the water, since it'd be two full integer pipelines versus one hyperthreaded one (on the 2600K).

And with regard to FP, I think the BD FP pipeline is double width and is supposed to do parallel work for both cores of a module with 128bit SIMD instructions? I don't think Intel's 256bit FP pipeline can internally parallelize multiple independent 128bit SIMD instructions like that. So again, performance should be close. Only with AVX should there be a discernible difference, since BD can either supply a 128bit half to each core, or the full 256bit width to one core.
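
To put the width thing in code (my own toy example, not from any actual benchmark): one 256bit AVX op covers the same work as two independent 128bit ops, which is exactly the pair a double-width BD FPU could split between the two cores of a module.

code:
/* gcc -O2 -mavx avx_width.c -- needs an AVX-capable CPU */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float r[8];

    /* One 256bit AVX op: eight floats at once. */
    __m256 v = _mm256_add_ps(_mm256_loadu_ps(x), _mm256_loadu_ps(y));
    _mm256_storeu_ps(r, v);

    /* The same work as two independent 128bit ops -- the kind a
       double-width FPU could run in parallel for two cores. */
    __m128 lo = _mm_add_ps(_mm_loadu_ps(x),     _mm_loadu_ps(y));
    __m128 hi = _mm_add_ps(_mm_loadu_ps(x + 4), _mm_loadu_ps(y + 4));
    _mm_storeu_ps(r,     lo);
    _mm_storeu_ps(r + 4, hi);

    printf("%f\n", r[0]);   /* 9.0 either way */
    return 0;
}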

Then again, some developers of "math"-heavy code don't seem to dig AVX and the like, because the wide ops are float-only. So it wouldn't mean much anyway. For instance, the x264 developer thinks it's a pile of poo poo (x264 is mostly integer math).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

trandorian posted:

It could, but the necessity of recompiling everything means that you don't get the legacy app support that allows movement to it the way x86-64 did or even the way various expansions to the instruction set since the original 8086/8088 did. Not to mention there's an awful lot of x86 devices that don't run Windows.
I wouldn't be surprised if someone came up with an assembly-level recompiler. Seeing how W8 tablets run Windows and can be locked down to hell with GPOs, I figure enterprises would rather have Windows tablets going around instead of iPads. Even if the locking down isn't a factor, say in a BYOD scenario, a Windows tablet opens up the possibility of running an enterprise's ancient crap on it. That makes the case for a recompiler, since I'm sure a lot of that software is so ancient its coders lived in pyramids. After all, only the APIs need to be there, not the CPU architecture per se.
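
The core of such a recompiler or emulator is just a fetch-decode-dispatch loop over the guest's instructions; real translators cache translated blocks instead of interpreting every time, but the shape is the same. A toy sketch in C, with a made-up mini-ISA (nothing resembling actual x86 translation):

code:
#include <stdio.h>
#include <stdint.h>

/* Invented guest opcodes, for illustration only. */
enum { OP_LOADI, OP_ADD, OP_PRINT, OP_HALT };

int main(void) {
    /* Guest program: r0 = 40; r1 = 2; r0 += r1; print r0 */
    uint8_t code[] = { OP_LOADI, 0, 40,  OP_LOADI, 1, 2,
                       OP_ADD, 0, 1,  OP_PRINT, 0,  OP_HALT };
    int32_t reg[4] = {0};
    size_t pc = 0;

    for (;;) {
        switch (code[pc]) {
        case OP_LOADI: reg[code[pc+1]] = code[pc+2];       pc += 3; break;
        case OP_ADD:   reg[code[pc+1]] += reg[code[pc+2]]; pc += 3; break;
        case OP_PRINT: printf("%d\n", reg[code[pc+1]]);    pc += 2; break;
        case OP_HALT:  return 0;   /* prints 42, then stops */
        }
    }
}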

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
That certain price point will surely be offset by the higher power bill.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Alereon posted:

HardOCP has an article up comparing overclocked gaming performance between Bulldozer and an i5 2500K. Unsurprisingly, Bulldozer loses, badly. This should shut up those few fanboys who rave about how great it overclocks.
Holy crap. Are they going to blame that on scheduler problems? I wonder how that ARMA benchmark would look with a 2600K in play, for 4C/8T SMT.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Mr Chips posted:

Maybe I'm not an excitable tech site writer looking for page hits, but 'marginally quicker in some things, a bit slower in others and a bit more power hungry for a given price point' isn't what I'd call a 'catastrophe'.
Well, it's Peter Bright who wrote the article. Judging by his shtick on the forums, he absolutely loathes AMD.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
There are plenty of rumors going around that the Zen 8-core will be relatively cheap, at least compared to the 8-core HEDTs from Intel. If that's indeed the case, and Zen is competitive with Intel's line-up, I'm toying with the idea of a 2S showboat build. That'd be loving hilarious.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Intel buying out AMD will never happen. Regulatory bodies won't let this go through.

That said, I don't get the idea. AMD probably figured that the iGPU is cannibalizing the low-end GPU market, so they'd gain more by licensing the tech out. Of course, that'll make the iGPU even better and have it cannibalize the midrange market, too. So what the hell?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Live demos are uninteresting, because the conditions are controlled by the OEM. I'm more interested in benchmarks done by third parties.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Anime Schoolgirl posted:

if the cache banks aren't separate that's 16mb of l3 cache for games which means good min framerates if the microcode is more or less similar to intel's
Based on the supposed die picture, the cache seems to be split in two. Four cores per half.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

repiv posted:

The Blender test is pretty convenient for AMD since its renderer doesn't have an Embree backend. If you did the same test with RenderMan, V-Ray or UE4 Lightmass I think you'd get a very different result :v:
I don't get this Embree thing. I guess it's a cheap way to quickly get the max out of multiple generations of Intel CPUs, but Arnold Renderer seems to do just fine without it (if you put V-Ray GI into pure brute-force mode, its speed goes to poo poo, too).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If the SR7+ goes for 500 bux and can hold a candle to the Intel offerings in regard to IPC, I'm in with a 2S board.

That said, imma wait a few months and let the fanboys figure out the teething troubles.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Boiled Water posted:

It'll be good enough for most people who buy facebooking laptops that it won't matter.
Those probably don't need x86 emulation to begin with. If Microsoft finally allows random Win32 apps, software companies can just spin ARM builds. Like Google and Mozilla.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Yeah, the SIMD units seem to be the real killers here. Intel CPUs hit the heat ceiling pretty quickly when running a lot of AVX code and have to throttle themselves. If you're overclocking with a lot of voltage offset, no surprise it'll poo poo the bed eventually.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Looks like I'll be hanging onto this overclocked 5820K of mine a while longer.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I was hoping to score a cheap eight-core CPU for rendering stuff out, if the IPC were similar enough and Zen overclocked well. As it looks, the performance boost isn't big enough to justify the cost of a switch.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Twerk from Home posted:

Would you have switched to a 6900K if it cost $500 or $600 instead of $1k? From what I can tell, a ~$500 6900K is about the best case we can expect out of the top end Zen SKU, and that would be killer if they did!
You have to consider that I already have an LGA 2011-3 mainboard that I can fit the 6900K into, whereas with Zen I'd also need to buy a new mainboard. So one question is whether Intel would drop the price to match the cost of a Zen plus mainboard, or go lower. But we're talking about Intel here, so it might not even happen.

Also, there's the silicon lottery to consider. I still haven't researched how much drama it is to overclock a 6900K to 4GHz stably. By drama I mean the voltage required for a stable overclock.

--edit: I think I've misunderstood the question.

Either way, I guess it'd be worthwhile to wait for a few more benchmarks. If I read that blurry text correctly, the Zen ran at 3.4GHz turbo, while the 6900K ran at 3.7GHz. Even if you scaled the benchmark linearly with the clock (which we know doesn't actually happen), the Zen would still be 10% short of the Broadwell-E.

Combat Pretzel fucked around with this message at 00:17 on Dec 24, 2016

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

FaustianQ posted:

Clocks are actually 3.15Ghz base and 3.3Ghz Turbo, it's not finalized this was that ES silicon that appeared in the wild like 4 months ago.
Hmmm, this gets it within 4% of the 6900K, assuming it scales linearly from 3.3 to 3.7GHz.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Paul MaudDib posted:

My 5820K does 4.13 GHz at stock voltages.
Mine runs at 4GHz with 1.1V. It didn't run stable at stock. Then again, in retrospect, on the first attempt I had also ramped the cache ratio up to 40, which I haven't done since.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

apropos man posted:

Are we saying that the L3 cache is the same, too? Intel just nerf it down on i5's and i3's?
Isn't the L3 cache per core, but shared via the ring bus? I'd figure disabling a core does the same to its slice of the cache.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
My swapfile autoconfigured itself to 4.8GB. On a 32GB machine. I don't think there's anything to sweat about keeping it.

What would be interesting is some knowledge of how Windows handles memory. On Linux, I know it'll do transparent hugepages to keep the page tables from blowing up when you run a lot of RAM. I have no idea what exactly Windows does. Large page tables don't help performance.
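
What Windows does expose is explicit large pages, gated behind the "Lock pages in memory" privilege. A minimal sketch with the documented Win32 calls (error handling kept short; the allocation fails with ERROR_PRIVILEGE_NOT_HELD unless that right is granted):

code:
/* cl largepage.c */
#include <windows.h>
#include <stdio.h>

int main(void) {
    SIZE_T lp = GetLargePageMinimum();   /* 0 if CPU/OS lack support */
    printf("large page size: %llu bytes\n", (unsigned long long)lp);
    if (lp == 0) return 1;

    /* MEM_LARGE_PAGES needs a size that's a multiple of lp, plus the
       SeLockMemoryPrivilege ("Lock pages in memory") user right. */
    void *p = VirtualAlloc(NULL, lp,
                           MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                           PAGE_READWRITE);
    if (!p) {
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }
    printf("got a large page at %p\n", p);
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}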

Combat Pretzel fucked around with this message at 01:33 on Dec 28, 2016

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

mayodreams posted:

I am an IT guy and I have NEVER heard about a Windows update resetting file associations. I have 3 Windows 10 physicals and a number of virtuals between home and work.
I once had Windows go nuts and make me confirm every file association the first time I opened a given filetype, i.e. that stupid Choose App dialog every drat time. Happened after the goddamn Anniversary update.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Depending on what sort of productivity stuff you're doing, Linux is out, too. I laud what some open-source projects do, but they still can't hold a candle to the commercial offerings. For some random tinkering, maybe, but not larger projects. I mean, if you don't have the financial means companies do, you might be poo poo out of luck, but let's face it, usually it's :filez: at that point anyway (even if there are Linux versions of the applications).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
What does that even mean? Versus stock 6950X or overclocked?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Thunderbolt is a loving travesty on the PC. How long has the standard been out? And there are still no, or very, very few, mainboards with a port. Up until recently, you had to install an expansion card that plugged into PCIe and some header on the mainboard.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Boiled Water posted:

I've yet to see a compelling case for thunderbolt that isn't well covered with regular USB C 3.1.
I was mostly interested in it a while ago for cheap higher-than-Gigabit networking. But alas...

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Something different about the combo USB3/TB ports? If the ones on my phones are anything to go by, you'd have to be really stupid to not plug it in "hard enough".

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Kazinsal posted:

10GBASE-T is terrifying. Insane power consumption, and you get to experience your network cable getting physically warm to the touch. 10GBASE-CR/Direct Attach cables are more expensive (since they're twinaxial cables permanently affixed to a pair of SFP+ transceivers) and have pretty severe length limitations but are much better.
Means if I were to go with SFP+, I wouldn't have a new heating element between my computer and the NAS? I think SFP+ DA cables come in 5m lengths, which should be sufficient in my case. Sadly, I haven't found cheap used cards that work with FreeNAS.

--edit: Heh, passive DAC SFP+ is 0.1W.

Combat Pretzel fucked around with this message at 17:09 on Jan 9, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If memory speeds have a noticeable influence on frame rates, why wouldn't quad channel?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The hardware approach makes more sense anyway, because when a CPU generation changes things up a lot internally, you'd have to recompile just about everything to keep the performance coming. That probably doesn't matter for word processors and such, but anything beyond that? Games would need at least two builds of their executables to span even one architecture improvement, media-creation people who don't want to upgrade their software would get shafted, and so on. Whereas if it happens on the CPU, it's mostly transparent, although there are still some minor performance advantages to be had by playing to the peculiarities of a processor architecture.
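
For what it's worth, compilers can paper over part of this in software. GCC's target_clones attribute, for instance, builds several variants of a hot function and picks the best one at load time, so a single binary spans multiple architecture generations. A minimal sketch (the function itself is made up):

code:
/* gcc -O2 multiver.c  -- GCC 6+ on x86 */
#include <stdio.h>

/* GCC emits an AVX2 clone, an SSE4.2 clone and a baseline clone, plus
   a resolver that picks one for the running CPU at program load. */
__attribute__((target_clones("avx2", "sse4.2", "default")))
double dot(const double *a, const double *b, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

int main(void) {
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
    printf("%f\n", dot(a, b, 4));   /* 70.0, whichever clone runs */
    return 0;
}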

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Eh, I got DDR4-3000 RAM for my 5820K so that I can run it at 2400MHz with tight timings, without bumping the voltage to 1.35V. Might be applicable to Zen, too.
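
Rough math on why that works (CAS numbers hypothetical): absolute latency is cycles times cycle time, and a DDR4 cycle is 2000 / (data rate in MT/s) nanoseconds. DDR4-3000 at CL15 comes to 15 x 2000/3000 = 10ns; the same sticks derated to 2400 at CL12 come to 12 x 2000/2400 = 10ns. Same real latency, noticeably less voltage needed.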

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Uh, since when do games care about or "utilize" the number of memory channels? Unless the game looks up system information, it's none the wiser about the memory layout.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

ohgodwhat posted:

Whenever they read things from memory?
High-lah-rious!

Point was, games don't give a poo poo about memory architecture on the PC. There's no code suddenly deciding to throttle the framerate because oops, there's dual or quad channel. Neither is there code that tries to guess the interleaving and align things accordingly. Just because performance considerations were made for console systems, which happen to have single-channel memory and the related bandwidth caps, doesn't mean there isn't anything to be had on higher-end systems. The case of FO4 shows there are situations where you're bandwidth limited, not CPU limited (--edit: Yes, bandwidth, because default memory timings scale almost linearly with frequency, so latency doesn't really change). That probably applies to more games that stream or generate world geometry, plus related assets, on the fly.
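
If you want to see your own bandwidth ceiling, a STREAM-style triad in C is enough (quick sketch of mine; the array size is arbitrary, just big enough to blow out the caches):

code:
/* gcc -O2 stream_triad.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    size_t n = (size_t)1 << 24;   /* 16M doubles = 128MB per array */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) return 1;
    for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + 3.0 * c[i];          /* classic STREAM triad */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* three arrays touched per element: two reads plus one write */
    printf("%.2f GB/s\n", 3.0 * n * sizeof(double) / sec / 1e9);
    free(a); free(b); free(c);
    return 0;
}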

Combat Pretzel fucked around with this message at 17:00 on Jan 16, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Oh gee, 16 lanes only. Totally didn't consider that one. I love my NVMe SSD and my 10GbE adapter, so I guess I'm out for the time being.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Arzachel posted:

As far as I know, x370 has 16x 3.0 lanes for graphics, 4x 3.0 for NVMe and 4x 3.0 and 8x 2.0 general purpose lanes, so you should be fine unless you also want 6 SATA and 2 SATAe ports for some reason.
I'm split on all that. If I'm going with an overclockable system, I'd like to use all devices to their maximum potential. Sharing lanes between GPU and 10GbE, or running NVMe and 10GbE over the southbridge, kind of feels funny. Sure, the network adapter is the odd one out here, but my NAS can shovel 3-4 gigabit over that pipe on a cold cache, so it has its use (and I have yet to install the cache SSD).

On the other hand, cheap 8C/16T. Given the difference in available PCIe lanes and memory channels, I don't expect Intel to drop the prices on their 8C/16T anytime soon.

--edit: Oh the SATA lanes can double as PCIe.

Combat Pretzel fucked around with this message at 09:07 on Feb 3, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I love how they dismiss the number of cores as a performance factor, just to end the paragraph highlighting the availability of a 10-core of their own.

Did they issue something like that before, whenever AMD released a CPU?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

EdEddnEddy posted:

Also I can confirm from random visits into the verse, that Star Citizen does like lots of cores (4-6 separate cores per Task Manager?), but the old SB-E does handle it quite well. Though the games current limitation really is the storage transfer speed. IF you run it not on an SSD you are never going to see it run smooth. I bet it could even saturate a freaking PCI-E SSD but I don't have one to test with.
A really terribly implemented game isn't exactly a data point worth considering, except as an edge case that shows the effect of having significant performance reserves.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

MaxxBot posted:

This looks really good, the $389 1700X is barely slower than the 6900k.

http://wccftech.com/amd-ryzen-7-1700x-389-8-core-cpu-benchmarks-leaked/
What's the reason for the fails in the Prime and Physics benchmarks?
