Factory Factory
Mar 19, 2010

This is what Arcane Velocity was like.
It's marketing. Pure "bigger numbers = better than." For a tenuous justification: the -E variants are released later, closer to the release of the die shrink than to the release of the original uarch. Add to that that they are better chips (in the sense of numbers, if not necessarily benchmarks), and therefore they're "next-gen."


Factory Factory
Mar 19, 2010


Agreed posted:

FFactory, you mentioned that enabling the integrated GPU for Haswell only carries like a 10-15W penalty? Any idea how Intel is doing in terms of perf/watt? Trying to bone up on my IGPU knowledge since I kinda feel like we'll be looking more at nVidia vs. Intel in the coming efficiency wars, and even if the perf isn't quite there yet, that is a TINY amount of power to be able to run games as well as they do (considering). It only seems like it's gonna come out of nowhere when they become genuinely competitive because it's so unusual to associate "Intel" with "Actually quite good graphics, really." They've been making amazing improvements, and they really, really nailed 14nm better than anyone else. Their fins are rectangular! Did you SEE how tiny those interconnects are getting? :3:

I could back-of-the-napkin about it...

Haswell GT3 is about 174mm2. That review tests it at both 47W (full chip) and 55W (ditto). Sometimes the extra 8W gives more performance, i.e. when the CPU is being stressed, and sometimes it doesn't. Specific benchmarks are hard to generalize because Gen 7 HD Graphics is architecturally more distinct than GCN and Fermi et al. are from each other. In particular, HD Graphics is extremely shader-heavy, giving results like this:

[benchmark chart]

But also like this:

[benchmark chart]

So there's not going to be a single performance-per-watt figure. There's also more that makes the comparison apples-to-oranges: that 10-15W is just the GPU (and, for the record, it's an old figure from Sandy Bridge). It doesn't include RAM or power-delivery overhead, which can take up to 50% of the board power on a modern card.

So comparing that 10-15W to a generally-similar-performing GeForce GT 640 (GDDR5) at 49W max isn't the whole story. On one hand, if you included the motherboard VRMs and system's DDR3 SDRAM necessary to enable the GPU, you'd likely come up with something a lot closer to 49W than 10W. But on the other hand, the GT 640 also needs the motherboard VRMs and system RAM to produce useful work.

So I'd have to estimate anywhere from "in line" to "great, including structural advantages."
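To see why the estimate spans "in line" to "great," here's a minimal sketch of the napkin math. The wattages are the rough figures from this post; the ~20W of platform overhead is a made-up illustrative number, not a measurement, and `efficiency_ratio` is just a hypothetical helper:

```python
# Back-of-the-napkin sketch of why the perf/watt answer swings so widely.
# Assumed figures from the post: iGPU alone ~10-15 W, GT 640 (GDDR5) 49 W
# board power, and roughly similar gaming performance between the two.

def efficiency_ratio(igpu_watts: float, card_watts: float = 49.0) -> float:
    """Rough perf/watt advantage of the iGPU, assuming equal performance."""
    return card_watts / igpu_watts

# Counting only the GPU block itself, the iGPU looks ~4x more efficient:
gpu_only = efficiency_ratio(12.5)          # midpoint of the 10-15 W figure

# Folding in a guessed ~20 W of VRM + system-RAM overhead (illustrative
# only), the two land much closer to "in line":
with_overhead = efficiency_ratio(12.5 + 20.0)

print(f"GPU-only ratio:      {gpu_only:.2f}x")
print(f"With overhead ratio: {with_overhead:.2f}x")
```

The whole conclusion hinges on what overhead you decide to charge to each side, which is why there's no one honest number.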

Certainly doesn't have a die size advantage, though - the GeForce GT 640's GK107 is about 118 mm2, about 2/3 the size.

Factory Factory fucked around with this message at 07:03 on Sep 28, 2014

Factory Factory
Mar 19, 2010


Ika posted:

So apparently while the onboard graphics of Z97 are HDCP compliant, they don't do mpeg4 decoding.... yay.

Yes it does? What's the source?

Factory Factory
Mar 19, 2010

Though in a similar post I Googled, there's an Intel response saying it's a known issue.

So HD Graphics definitely does accelerate H.264 decoding, and that implies MPEG-4, because H.264 is MPEG-4 Part 10 (AVC). I can play Blu-rays fine on my HD Graphics parts (two PCs with Gen6 HD 3000 and one tablet with Gen7 Ivy/Haswell-era HD Graphics), and they play my old WMC recordings fine, too (though not the premium channel ones - that PC's playback credentials got horked).

I guess we got a bona fide mystery on our hands.

Factory Factory
Mar 19, 2010

Arcsoft dropped support for TotalMedia Theatre. It's now abandonware.

Factory Factory
Mar 19, 2010


Agreed posted:

Generally the right to circumvent that sort of invasive copy protection for the purpose of personal backup is guaranteed by consumer protections, when it's guaranteed anyway, or at least that's my understanding.

I thought you were an Australian; am I wrong about the strong consumer protections there, or about your nationality, or just about whether that means anything in this discussion? :shobon:

The right to make a copy is a core part of copyright law's Fair Use doctrine, but there is no fair use exception to the DMCA or Australia's equivalent law (both of which were implementation laws for international copyright treaties). Circumventing copy protection is a legal DO NOT DO, and you have no legal right to do it except for a tiny subset of Fair Use cases. That this directly contradicts Fair Use generally has not been litigated.

Factory Factory
Mar 19, 2010

Well, for one thing, it's stopping me from giving up and ripping my Blu-ray collection so I can be done with this junk...

Factory Factory
Mar 19, 2010


SwissCM posted:

Get AnyDVD HD and your choice of encoder/ripper and you're good.

I am a law student. I have to take the bar in a year or two. I would call it "a very stupid idea" to knowingly violate Federal law.

Factory Factory
Mar 19, 2010

Lynnfield to Sandy Bridge was about 15% isoclock with significant clock bumps. Sandy to Ivy was about 5% isoclock, with minor clock bumps at the high end and higher sustainable turbo clocks at the low end. Ivy to Haswell was about 10% isoclock, again with minor clock bumps at the high end and higher sustainable turbo clocks at the low end.

Haswell's power-constrained improvements over Ivy were better than Ivy's improvements over Sandy.

Clock bumps refer to stock frequency.
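Those per-generation isoclock gains compound multiplicatively. A minimal sketch, using the rough percentages from this post (not official Intel figures), with `compound_gains` as a made-up helper:

```python
# Compound the per-generation isoclock (same-clock-speed) estimates quoted
# above: Lynnfield->Sandy ~15%, Sandy->Ivy ~5%, Ivy->Haswell ~10%.

def compound_gains(gains):
    """Multiply out successive fractional improvements."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total

lynnfield_to_haswell = compound_gains([0.15, 0.05, 0.10])
print(f"Lynnfield -> Haswell isoclock: ~{(lynnfield_to_haswell - 1) * 100:.0f}%")
# i.e. roughly a third more work per clock across the three jumps
```

Which is why a heavily overclocked Lynnfield can still hang around, but falls well short of a stock Haswell per clock.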

Factory Factory
Mar 19, 2010

Bestest-y-estest is the Core i7-4790K, but only because of high stock clocks and hyperthreading. If you overclocked and didn't need hyperthreading, a lightly juiced i5-4690K would be as good 99.9% of the time (the 0.1% being the difference in L3 cache and mostly restricted to financial or scientific apps).

Rendering will see a healthy boost from hyperthreading, though.

Factory Factory
Mar 19, 2010

I'm pretty sure I neologized it because I was sick of typing "at the same clock speed" and "at the same voltage" in the overclocking thread.

Factory Factory
Mar 19, 2010

Equipotent sounds like the name of a boner drug.

Factory Factory
Mar 19, 2010

Hypothetically you'd use such a CPU for a low-impact virtualization server (think a place selling personal web server VMs) or a larger NAS/entry SAN. Which is exactly where 8-core Avoton SoCs (Silvermont Atom cores) are used, e.g. the ASRock C2750D4I, a mini-ITX server board with 12 SATA ports and 4 DIMM slots supporting up to 64 GB of ECC DDR3 (plus a PCIe x8 slot for good measure).

Actually, Jaguar and Atom cores are so wimpy compared to Haswell that even a Haswell Core i3 can do all the work that the 8 cores in the Xbox One and PS4 can, and then some. There may be some workloads where the 8 real cores edge out 2 beefy hyperthreaded cores, but I'd guess that'd be an edge case.

Factory Factory
Mar 19, 2010


Tahirovic posted:

So I am shopping for a new gaming rig based around a GTX980 and I was stupid enough to consider a 5930K over a 4790K, after reading this thread I am leaning towards a 4790K now. I am mostly wondering how power consumption and heat generation of the two compare when overclocked, is there gonna be much difference? Is one of the two easier to OC while keeping it quiet?

Does anyone have any personal reviews/an opinion on the Corsair Hydro Series coolers? (might be outside the scope of this thread, sorry if it is)

Heat will be higher on the 5930K - way higher. Overclocking can easily add 60% to 100% to power draw, vs. ~50% on a 4790K. We're talking up to 300W on the 5930K you need to dissipate (24/7-safe will be more like 250W, though), vs. 150W on a 4790K pushed just as hard. So you need to get rid of a SHITLOAD of heat. That's not happening quietly without some really expensive gear, like the all-copper radiator on the Cooler Master Glacer 240L CLC.

Side note: Is the 5930K really the right choice here? The 5820K is all the CPU, just with 28 PCIe lanes instead of 40. 28 PCIe lanes is plenty for two-way SLI and an upcoming hot-poo poo m.2 drive.

As for Corsair Hydro coolers... They're Asetek OEM. Copper water block, aluminum radiator. Good kit, though really similar to any other Asetek CLC like the NZXT Krakens or a number of other brands. But the fans are often just... okay. Corsair really wants you to feel like you get good results buying and installing their SP120s (or the high-static-pressure fan of your choice).

I would be really hesitant to get anything smaller than an H110 or Kraken X60 for a Haswell-E overclock.

Factory Factory
Mar 19, 2010

AnandTech's benchmarks said 1% to 19%, 8.3% average, over Ivy Bridge at the same clocks, though that didn't control for differences in thermal performance.

Factory Factory
Mar 19, 2010

SP120 is good. Or just cut to the chase and get a pair of Noctua NF-F12 PWM.

Factory Factory
Mar 19, 2010

It could go either way. Going from 2.66 GHz to 4 is a significant boost and will be a significant reduction in the amount of suck that can be attributed to your CPU. That is not as fast as a stock Haswell i5/i7 (depending on what tasks you're comparing), and nowhere near an overclocked Haswell quad-core, but it's a lot closer than stock clocks. A 4 GHz overclock might let you tough it out until 2015's Skylake (with Broadwell being Haswell's die-shrink, due soon in mobile and less soon on the desktop).

Factory Factory fucked around with this message at 14:40 on Oct 20, 2014

Factory Factory
Mar 19, 2010

Timeframes seem to be getting a bit weird.

According to TR, according to rumors, Broadwell-E has been pushed back to sample in 4Q 2015 for a 2016 launch. Meanwhile, same article, Skylake-S is sampling already. The Google Translation is really shabby, but apparently the parts out there are 2.3 GHz base/2.9 GHz Turbo at 95W, with an estimated launch sometime in 2015 and notebook parts in 4Q15.

Factory Factory
Mar 19, 2010

That's bonkers. That's like an 85% discount. Did they fall off a truck?

Factory Factory
Mar 19, 2010


No Gravitas posted:

Are there any alternatives for people who want a ton of integer cores?

Congrats, you're the one person in a million who should be buying an AMD FX-83xx or a similar Opteron. Unless TCO matters more than hardware cost, in which case cram Xeons all up ons.

Factory Factory
Mar 19, 2010


Hace posted:

Wouldn't an i3 still be better for general multitasking and the like?

Yes, the same way an i7 would be over an i5 - anything but the most intensive use or specific tasks will never show the difference.

Factory Factory
Mar 19, 2010


GokieKS posted:

Ugh, seems like Dragon Age: Inquisition just flat out refuses to work on a dual-core machine regardless of how fast those cores may be.

What, really? I've seen reports of people having huge problems, but also people successfully playing the game on e.g. mobile i5s, which are dual core.

Factory Factory
Mar 19, 2010


The French Army! posted:

I've got an older Ivy Bridge Celeron G1610 on an MSI B75MA-E33 that I purchased when it was the new Celeron on the bock, figuring I'd put in a better CPU somewhere down the line. Never did, and now I don't really feel like spending money on a new old processor for this old board, but I'm also not in the market for a new rig until later next year whenever Skylake hits. In the mean time bumping this old Celeron up to 3.0GHz to squeeze a little more life out of her should be doable on the stock cooler right?

Sure! Plenty of thermal headroom. Except good luck overclocking a locked-multiplier CPU on a platform that shits itself if the reference clock varies by more than a few percent off 100 MHz.

Factory Factory
Mar 19, 2010

Even 2000 MB/s SSDs don't stand up to DRAM. Dual-channel DDR3-1600 is worth 25,600 MB/s peak. Corsair did some synthetic benchmarking of DDR3 vs. DDR4 speeds, and long story short, quad-channel DDR4 systems are playing with around 60,000 MB/s of bandwidth, albeit at higher latencies than dual-channel controllers.
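Those peak numbers fall straight out of channels x bus width x transfer rate. A quick sketch, assuming the standard 64-bit (8-byte) DDR channel width; `peak_bandwidth_mbs` is a made-up helper name:

```python
# Peak theoretical DRAM bandwidth: channels x 8 bytes/transfer x MT/s.
# (Each DDR channel is 64 bits = 8 bytes wide; MT/s is megatransfers/sec.)

def peak_bandwidth_mbs(channels: int, mt_per_s: int) -> int:
    """Peak bandwidth in MB/s for a given channel count and data rate."""
    return channels * 8 * mt_per_s

dual_ddr3_1600 = peak_bandwidth_mbs(2, 1600)    # 25,600 MB/s, as above
quad_ddr4_2133 = peak_bandwidth_mbs(4, 2133)    # ~68,000 MB/s theoretical;
                                                # measured closer to ~60,000

print(dual_ddr3_1600, quad_ddr4_2133)
```

Real-world throughput lands below the theoretical peak, which squares with quad-channel DDR4 benching around 60,000 MB/s rather than the full 68,000.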

Factory Factory
Mar 19, 2010


ElehemEare posted:

This being said, the last few generations of uArch changes have mainly given us greater efficiency. I'm running an i5-750 that still mostly chugs along well. Do we have any rational expectation that the Skylake uArch changes will have tangible benefits (for the mainstream gamer) that will outweigh the necessity of UniDIMM DDR3/DDR4 upgrades, in addition to mobo/CPU, for single GPU setups?

Keeping in mind that everything about Skylake is basically rumors right now:

  • DirectX 12 can allow heavy increases in CPU use to enable fuller rendering efficiency, such that there is a marked difference between dual-core and quad-core Haswell at the extremes of the API's capability. This suggests that the clock-and-uarch differences between Haswell and your i5-750 (even if overclocked) can make a difference to the worst-case extremes in terms of real frames per second. On a pre-release API with pre-release drivers on a pre-release OS with basically one synthetic benchmark and no actual games yet.
  • Finally getting TSX instructions to work should help the performance of such highly-threaded applications even when they are not especially optimized with fine-precision locking.
  • Moving to 20 PCIe lanes means you can do 2-way SLI while still using a fancy PCIe SSD without jumping to an Extreme board/CPU/RAM. Which doesn't count for single-GPU gamers, but I guess it means you can still have x8 for your GPU even if you load up RAID cards and PCIe SSDs and crazy poo poo.

There are some other rumored uarch differences, but none that look relevant for gaming as long as you aren't rendering on the CPU.

Factory Factory
Mar 19, 2010

You dinguses talking motherboard audio should go to the audiophile ridicule thread. S/PDIF is a digital connector. There's no loving "jitter," all the processing is through the Windows software sound mixer and comes out bit-perfect identical whether it's onboard S/PDIF, split off HDMI, USB to S/PDIF audio interface, or a $500 internal sound card's S/PDIF interface. Digital. The output is 1 or 0. It can't jitter. The digital interfaces on the card completely skip the DAC and any component which can produce any noise of any variety.

The only thing that the sound card and driver can do is up-mixing from stereo to surround (or 5.1 to 7.1 etc.) for content that isn't natively multi-channel.

atomicthumbs posted:

How do I get Quick Sync Video to work on my 4790K? I've made sure the onboard GPU is enabled and I installed the Intel drivers, but nothing that uses QSV seems to work.

In the BIOS/UEFI setup, is iGPU Multi-Monitor set to "Enabled"? If it already is and it's still not working, try hooking up a monitor cable (like hooking a DVI cable to another input on your screen or whatnot). There's no need to actually put an image on screen via the Windows multi-monitor settings, but it used to be a requirement for enabling QuickSync and might be worth trying as a troubleshooting step.

Alternatively, if you have an Nvidia video card, just use NVEnc - it's pretty good, too.

Factory Factory
Mar 19, 2010

But that said, on Haswell it is indeed L4 cache, and it probably will be on Skylake, as well.


Factory Factory
Mar 19, 2010

Cores plus GPU slices? I recall the rumors that all the high-end desktop Skylakes have Iris graphics.
