|
Aren't games FP heavy? And what about preprocessing in the graphics drivers?
|
# ¿ Jan 21, 2011 23:01 |
|
DaNzA posted:And finally the Abit VP6 that was great with dual P3 933 but had a bunch of bad caps ... loving multicore hipsters. We were at it way back. Actually, I even had a dual P2-350. At some point, Intel dropped consumer multiprocessing and AMD didn't have any offerings. I refused to upgrade away from my own P3-933 until AMD released the Athlon X2.
|
# ¿ Apr 10, 2011 11:54 |
|
Oh, I forgot about the Athlon MP. I can't remember why I didn't go that route.
|
# ¿ Apr 10, 2011 15:46 |
|
That doesn't look anything like other AMD presentations. Anyway, from how I understand it, a BD module is essentially two integer cores and one FP unit behind a common decoder frontend. So in regards to integer performance, the BD should blow the SB out of the water, since it'd be two integer pipelines versus one hyperthreaded one (on the 2600K). And in regards to FP, I think the BD pipeline was double width and was supposed to do parallel work for both submodules with 128bit SIMD instructions? I don't think the Intel 256bit FP pipeline can internally parallelize multiple independent 128bit SIMD instructions. So again, performance should be close. Only with AVX should there be a discernible difference, since BD can either supply a whole set of 128bit registers to each submodule, or one set of 256bit registers to one submodule. Then again, some developers of math-heavy code don't seem to dig SSE, AVX and the like, because they're float only, so it wouldn't mean much anyway. For instance, the x264 developer thinks they're a pile of poo poo (it's mostly integer math).
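Back-of-the-envelope, the 2x128bit versus 1x256bit argument works out like this. The pipe counts and widths below are my assumptions about the two designs, not measured numbers:

```python
# Rough per-cycle single-precision lane counts for the SIMD comparison
# above. Pipe counts/widths are assumptions, not measured figures.
FLOAT_BITS = 32

def floats_per_cycle(simd_width_bits, n_pipes):
    """Single-precision lanes retired per cycle across all pipes."""
    return (simd_width_bits // FLOAT_BITS) * n_pipes

# One Bulldozer module: shared FP unit, assumed two 128-bit FMAC pipes.
bd_module = floats_per_cycle(128, 2)

# One Sandy Bridge core, simplified to one 256-bit AVX pipe for the sake
# of the comparison.
sb_core = floats_per_cycle(256, 1)

print(bd_module, sb_core)  # both come out to 8 lanes/cycle
```

Which is the whole point: as long as code sticks to 128bit SSE, the two arrangements look the same on paper, and the difference only shows up with 256bit AVX.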
|
# ¿ May 5, 2011 00:28 |
|
trandorian posted:It could, but the necessity of recompiling everything means that you don't get the legacy app support that allows movement to it the way x86-64 did or even the way various expansions to the instruction set since the original 8086/8088 did. Not to mention there's an awful lot of x86 devices that don't run Windows.
|
# ¿ Oct 22, 2011 15:27 |
|
That certain price point will surely be offset by the higher power bill.
|
# ¿ Oct 24, 2011 11:40 |
|
Alereon posted:HardOCP has an article up comparing overclocked gaming performance between Bulldozer and an i5 2500K. Unsurprisingly, Bulldozer loses, badly. This should shut up those few fanboys who rave about how great it overclocks.
|
# ¿ Nov 5, 2011 17:05 |
|
Mr Chips posted:Maybe I'm not an excitable tech site writer looking for page hits, but 'marginally quicker in some things, a bit slower in others and a bit more power hungry for a given price point' isn't what I'd call a 'catastrophe'.
|
# ¿ Nov 25, 2011 23:26 |
|
There's plenty of rumors going around that the Zen 8-core will be relatively cheap, at least compared to the 8-core HEDTs from Intel. If that's indeed the case, and Zen is competitive with Intel's line-up, I'm toying with the idea of a 2S showboat build. That'd be loving hilarious.
|
# ¿ Nov 24, 2016 14:51 |
|
Intel buying out AMD will never happen. The regulatory bodies won't let it go through. That said, I don't get the idea. AMD probably figured that the iGPU is cannibalizing the low end GPU market, so they'd gain more from licensing the tech out. Of course, that'll make the iGPU even better and have it cannibalize the midrange market, too. So what the hell?
|
# ¿ Dec 6, 2016 21:06 |
|
Live demos are uninteresting, because the conditions are controlled by the OEM. I'm more interested in benchmarks done by third parties.
|
# ¿ Dec 7, 2016 18:13 |
|
Anime Schoolgirl posted:if the cache banks aren't separate that's 16mb of l3 cache for games which means good min framerates if the microcode is more or less similar to intel's
|
# ¿ Dec 13, 2016 18:32 |
|
repiv posted:The Blender test is pretty convenient for AMD since its renderer doesn't have an Embree backend. If you did the same test with RenderMan, V-Ray or UE4 Lightmass I think you'd get a very different result
|
# ¿ Dec 14, 2016 03:32 |
|
If the SR7+ goes for 500 bux, and can hold a candle to the Intel offerings in regards to IPC, I'm in with a 2S board. That said, imma wait a few months and let the fanboys figure out the teething troubles.
|
# ¿ Dec 15, 2016 20:16 |
|
Boiled Water posted:It'll be good enough for most people who buy facebooking laptops that it won't matter.
|
# ¿ Dec 15, 2016 23:09 |
|
Yeah, the SIMD units seem to be the real killers here. Intel CPUs hit the heat ceiling pretty quickly when running a lot of AVX code and have to throttle themselves. If you're overclocking with a lot of voltage offset, no surprise it'll poo poo the bed eventually.
|
# ¿ Dec 18, 2016 16:56 |
|
Looks like I'll be hanging onto this overclocked 5820K of mine for a while longer.
|
# ¿ Dec 23, 2016 20:30 |
|
I'm hoping to score a cheap eight core CPU for rendering out stuff, if the IPC is similar enough and Zen overclocks well. As it looks, the performance boost isn't big enough for the cost of a switch.
|
# ¿ Dec 23, 2016 20:42 |
|
Twerk from Home posted:Would you have switched to a 6900K if it cost $500 or $600 instead of $1k? From what I can tell, a ~$500 6900K is about the best case we can expect out of the top end Zen SKU, and that would be killer if they did! Also, there's the silicon lottery to consider. I still haven't researched how much drama it is to overclock a 6900K to 4GHz stably. By drama I mean voltage required for a stable overclock. --edit: I think I've misunderstood the question. Either way, I guess it'd be worthwhile to wait for a few more benchmarks. If I read that blurry text correctly, the Zen ran at 3.4GHz turbo, while the 6900K ran at 3.7GHz. If you were to scale the benchmark linearly with the clock (which we know doesn't work like that), the Zen would still be 10% short of the Broadwell-E. Combat Pretzel fucked around with this message at 00:17 on Dec 24, 2016 |
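For what it's worth, the clock-normalizing goes like this. The score values below are hypothetical placeholders, only the clocks (3.4 vs 3.7GHz) come from the slide:

```python
# Naive linear clock scaling -- real workloads don't scale like this,
# it just puts a floor under the IPC gap. Scores are hypothetical.
zen_clock = 3.4   # GHz, Zen turbo as read off the blurry slide
bdw_clock = 3.7   # GHz, 6900K turbo

def scale_to_clock(score, from_ghz, to_ghz):
    """Scale a benchmark score linearly with clock speed."""
    return score * (to_ghz / from_ghz)

zen_score, bdw_score = 900.0, 1089.0   # hypothetical benchmark results
zen_at_bdw_clock = scale_to_clock(zen_score, zen_clock, bdw_clock)
deficit = 1 - zen_at_bdw_clock / bdw_score
print(f"{deficit:.1%}")   # still roughly 10% behind after normalizing
```

So even granting Zen the same clock on paper, with a score gap like that it'd still trail Broadwell-E by about 10% per clock.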
# ¿ Dec 24, 2016 00:12 |
|
FaustianQ posted:Clocks are actually 3.15Ghz base and 3.3Ghz Turbo, it's not finalized this was that ES silicon that appeared in the wild like 4 months ago.
|
# ¿ Dec 24, 2016 13:43 |
|
Paul MaudDib posted:My 5820K does 4.13 GHz at stock voltages.
|
# ¿ Dec 26, 2016 02:04 |
|
apropos man posted:Are we saying that the L3 cache is the same, too? Intel just nerf it down on i5's and i3's?
|
# ¿ Dec 27, 2016 17:56 |
|
My swapfile autoconfigured itself to 4.8GB. On a 32GB machine. I don't think there's anything to sweat about keeping it. What would be interesting is some knowledge on how Windows handles memory. On Linux I know that it'll do transparent hugepages and prevents the page table from blowing up when running a lot of RAM. I have no idea what exactly Windows does. Combat Pretzel fucked around with this message at 01:33 on Dec 28, 2016 |
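The Linux hugepage benefit at least is easy to put in numbers. A rough sketch of the page-table bookkeeping, assuming the x86-64 figure of 8 bytes per leaf PTE (the rest is just arithmetic):

```python
# Why transparent hugepages keep the page table from blowing up:
# bytes of leaf page-table entries needed to map all of RAM once.
# 8 bytes per PTE is the x86-64 leaf entry size.
GIB = 1024**3

def leaf_pte_bytes(ram_bytes, page_bytes, pte_bytes=8):
    """Bytes of leaf PTEs required to map ram_bytes of memory."""
    return (ram_bytes // page_bytes) * pte_bytes

ram = 32 * GIB
small = leaf_pte_bytes(ram, 4 * 1024)       # regular 4 KiB pages
huge  = leaf_pte_bytes(ram, 2 * 1024**2)    # 2 MiB transparent hugepages

print(small // 1024**2, "MiB of PTEs with 4 KiB pages")  # 64 MiB
print(huge // 1024, "KiB of PTEs with 2 MiB pages")      # 128 KiB
```

That's a 512x reduction in bookkeeping on a 32GB box, never mind the TLB pressure. No idea how much of that logic Windows mirrors.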
# ¿ Dec 28, 2016 01:29 |
|
mayodreams posted:I am an IT guy and I have NEVER heard about a Windows update resetting file associations. I have 3 Windows 10 physicals and a number of virtuals between home and work.
|
# ¿ Dec 30, 2016 20:11 |
|
Depending on what sort of productivity stuff you're doing, Linux is out, too. I laud what some open source projects do, but they still can't hold a candle to the commercial ones. For some random tinkering, maybe, but not larger projects. I mean, if you don't have the financial means that companies do, you might be poo poo out of luck, but let's face it, you usually are at that point anyway (even if there's Linux versions of the applications).
|
# ¿ Dec 31, 2016 00:14 |
|
FaustianQ posted:Careful on that IPC guess
|
# ¿ Jan 6, 2017 14:56 |
|
Thunderbolt is a loving travesty on the PC. How long has the standard been out? And there's still no, or very very few, mainboards with a port? Up until recently, you had to install expansion cards that plugged into PCIe and some header on the mainboard.
|
# ¿ Jan 7, 2017 15:26 |
|
Boiled Water posted:I've yet to see a compelling case for thunderbolt that isn't well covered with regular USB C 3.1.
|
# ¿ Jan 7, 2017 17:34 |
|
Something different with the combo USB3/TB ports? If the ones on my phones are anything to go by, you gotta be really stupid to not plug it in "hard enough".
|
# ¿ Jan 7, 2017 19:57 |
|
Kazinsal posted:10GBASE-T is terrifying. Insane power consumption, and you get to experience your network cable getting physically warm to the touch. 10GBASE-CR/Direct Attach cables are more expensive (since they're twinaxial cables permanently affixed to a pair of SFP+ transceivers) and have pretty severe length limitations but are much better. --edit: Heh, passive DAC SFP+ is 0.1W. Combat Pretzel fucked around with this message at 17:09 on Jan 9, 2017 |
# ¿ Jan 9, 2017 14:58 |
|
If memory speeds have noticeable influence on frame rates, why wouldn't quad channel?
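Quick napkin math on that, using the usual 64-bit channel width (peak theoretical numbers, memory controller overhead ignored):

```python
# Theoretical peak bandwidth for DDR-style memory: 64-bit channels,
# transfer rate in MT/s. Controller overheads not accounted for.
def peak_gb_s(mt_s, channels, bus_bits=64):
    """Peak bandwidth in GB/s (decimal) for a given memory config."""
    return mt_s * 1e6 * (bus_bits // 8) * channels / 1e9

dual = peak_gb_s(2400, 2)   # dual channel DDR4-2400: 38.4 GB/s
quad = peak_gb_s(2400, 4)   # quad channel DDR4-2400: 76.8 GB/s
print(dual, quad)
```

Doubling the channels doubles the peak bandwidth outright, which is a bigger jump than any realistic frequency bump on dual channel.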
|
# ¿ Jan 12, 2017 13:09 |
|
The hardware approach makes more sense anyway, because when a CPU generation changes things up a lot internally, you'd have to recompile just about everything to keep the performance up. Probably doesn't matter for word processors and such, but anything beyond that? Games would need at least two builds of the executables to span even one architecture change, media creation people not wanting to upgrade software would get shafted, and so on. Whereas if it happens in the CPU, it's mostly transparent, although there's still some rather minor performance advantages to be had when playing to the peculiarities of a processor architecture.
|
# ¿ Jan 14, 2017 13:23 |
|
Eh, I've gotten DDR4-3000 RAM for my 5820K, so that I can run it at 2400MHz with tight timings without bumping the voltage to 1.35V. Might be applicable to Zen, too.
|
# ¿ Jan 15, 2017 19:56 |
|
Uh, since when do games care about or "utilize" the amount of memory channels? Unless the game looks up system information, it's none the wiser about the memory layout.
|
# ¿ Jan 16, 2017 16:20 |
|
ohgodwhat posted:Whenever they read things from memory? Point was, games don't give a poo poo about memory architecture on the PC. There's no code suddenly deciding to throttle the framerate because oops there's dual or quad channel. Neither is there code that tries to guess the interleaving and align things accordingly. Just because performance considerations were made for console systems, which happen to have single channel memory and related bandwidth caps, doesn't mean there isn't anything to be had on higher end systems. The case of FO4 shows that there's situations where you're bandwidth and not CPU limited (--edit: Yes, bandwidth, because default memory timings scale almost linearly with frequencies, so latency doesn't really change). Probably applies to more games that stream or generate world geometry, plus related assets, on the fly. Combat Pretzel fucked around with this message at 17:00 on Jan 16, 2017 |
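The timings-scale-with-frequency point in actual numbers. The CL/speed pairs below are typical examples I picked, not anything off a particular spec sheet:

```python
# Absolute CAS latency in nanoseconds: CL is counted in memory clocks,
# and the memory clock is half the transfer rate (DDR), hence the 2000.
def cas_ns(cl, mt_s):
    """CAS latency in ns for a given CL and transfer rate in MT/s."""
    return cl * 2000 / mt_s

print(round(cas_ns(15, 2133), 2))  # DDR4-2133 CL15 -> 14.06 ns
print(round(cas_ns(17, 2400), 2))  # DDR4-2400 CL17 -> 14.17 ns
print(round(cas_ns(21, 3000), 2))  # DDR4-3000 CL21 -> 14.0 ns
```

Latency in nanoseconds stays basically flat across speed grades, while bandwidth scales with the transfer rate. So a game that gains frames from faster RAM is gaining them from bandwidth.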
# ¿ Jan 16, 2017 16:57 |
|
Oh gee, 16 lanes only. Totally didn't consider that one. I love my NVMe SSD and my 10GbE adapter, so I guess I'm out for the time being.
|
# ¿ Feb 3, 2017 06:30 |
|
Arzachel posted:As far as I know, x370 has 16x 3.0 lanes for graphics, 4x 3.0 for NVMe and 4x 3.0 and 8x 2.0 general purpose lanes, so you should be fine unless you also want 6 SATA and 2 SATAe ports for some reason. On the other hand, cheap 8C/16T. Given the distinction in available PCIe lanes and memory channels, I don't expect Intel to drop the prices on their 8C/16T anytime soon. --edit: Oh the SATA lanes can double as PCIe. Combat Pretzel fucked around with this message at 09:07 on Feb 3, 2017 |
# ¿ Feb 3, 2017 08:56 |
|
I love how they dismiss the amount of cores as a performance factor, just to end the paragraph highlighting the availability of a 10-core of their own. Did they issue something like that before, whenever AMD released a CPU?
|
# ¿ Feb 8, 2017 03:44 |
|
EdEddnEddy posted:Also I can confirm from random visits into the verse, that Star Citizen does like lots of cores (4-6 separate cores per Task Manager?), but the old SB-E does handle it quite well. Though the games current limitation really is the storage transfer speed. IF you run it not on an SSD you are never going to see it run smooth. I bet it could even saturate a freaking PCI-E SSD but I don't have one to test with.
|
# ¿ Feb 9, 2017 19:07 |
|
MaxxBot posted:This looks really good, the $389 1700X is barely slower than the 6900k.
|
# ¿ Feb 12, 2017 04:05 |