PerrineClostermann
Dec 15, 2012

by FactsAreUseless

Alereon posted:

The 45nm C2Ds were a lot better; the originals were often paired with P4 chipsets and slow DDR2, and that was not a good combination. A 4-series chipset with AHCI support and DDR2-800+ is in a much better position today.

Was the E6750 one of them? I think it was 65nm...


StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
What's up with the not-power-of-2 stuff this round? Does that not have to create some mismatch at other layers?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

PerrineClostermann posted:

Was the E6750 one of them? I think it was 65nm...
Nope, 45nm was 7000-series and up, though the 6750 wasn't that bad since it had a 1333MHz FSB, 4MB of L2, and ran at 2.67GHz. If you got one in late '07 and paired it with a 3-series chipset board with an ICH9R southbridge (with AHCI), that was a good system. If you got one in late '06 or early '07 with a P965 chipset and ICH7, or especially if you cheaped out and got an E4400 with an 800MHz FSB, 2MB of L2, and a 2GHz clock, that was a much worse experience.

To add bonus confusion, there was also a line of 45nm Pentium Dual-Cores that shared the E6xxx model numbers. The Pentium E6700 had a 1066MHz FSB, 2MB of L2 cache, and was clocked at 3.2GHz, which made it a surprisingly good value.

I've got the Core 2 Quad Q9550, which packed an astonishing 12MB of L2 cache. That has helped it hold up very well compared to newer processors with integrated memory controllers: I have it overclocked to 3.6GHz and it generally beats a stock Core i5 2500K in most benchmarks. There are cases where it just falls over and I desperately wish for a new CPU, though. I'm going to love roughly tripling my effective memory bandwidth.
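
Rough back-of-envelope on the bandwidth claim, as a tiny C program (assuming the 1333MT/s FSB really is the effective cap on the Q9550, and dual-channel DDR3-1600 on whatever I upgrade to; both are assumptions, not benchmarks):

code:
#include <stdio.h>

int main(void) {
    /* FSB: 1333 MT/s x 8 bytes per transfer */
    double fsb  = 1333e6 * 8;
    /* Dual-channel DDR3-1600: 2 channels x 1600 MT/s x 8 bytes */
    double ddr3 = 2 * 1600e6 * 8;
    printf("FSB cap: %.1f GB/s, DDR3-1600 dual-channel: %.1f GB/s (%.1fx)\n",
           fsb / 1e9, ddr3 / 1e9, ddr3 / fsb);
    return 0;
}

That works out to about 10.7 vs 25.6 GB/s, so "tripling" is a bit optimistic unless the new board runs DDR3-1866 or better.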

StabbinHobo posted:

What's up with the not-power-of-2 stuff this round? Does that not have to create some mismatch at other layers?
Nah, they designed it this way from the ground up: three 5-core tiles.

Alereon fucked around with this message at 05:03 on Feb 22, 2014

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Alereon posted:

astonishing 12MB of L2 cache.

Famously, this made Anand (of AnandTech fame) question the Intel engineers about the Nehalem architecture, because it had so much less cache.

I had a Q9550 too, it was a great chip (an engineering sample), and worked just fine at 3.8GHz. Well, at least it went to a good home (younger brother).

Fats
Oct 14, 2006

What I cannot create, I do not understand
Fun Shoe

Straker posted:

C2Ds are pretty lovely, I had one overclocked to 3.2GHz that could barely transcode video in realtime; now with a 2500K I can transcode video and play BF4 or whatever at the same time, no problem. Nothing out now is much better than a 2500K though, kinda sad.

I've had the same i7-920 for 5(!) years now, and I've yet to feel like I needed more. Considering it's been running at 4GHz since I got it, I imagine it'll die a terrible electrical death before it's really obsolete.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
12MB? drat. I think the 2600K only has 8MB.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

HalloKitty posted:

Famously, this made Anand (of AnandTech fame) question the Intel engineers about the Nehalem architecture, because it had so much less cache.

I had a Q9550 too, it was a great chip (an engineering sample), and worked just fine at 3.8GHz. Well, at least it went to a good home (younger brother).

PerrineClostermann posted:

12MB? drat. I think the 2600K only has 8MB.
And that's L2 cache too, not L3 like on modern CPUs. I think this is the article HalloKitty was mentioning: up through the Core 2 days, Intel used big L2 caches shared between the cores, which optimized how efficiently data was packed into the L2. The downside is that larger caches have higher latency, which kind of defeats the purpose of having a cache. Intel decided the best approach was to give each core its own smaller, private L2 cache and add a larger shared L3 cache to handle applications where 256KB just isn't enough. It does seem like we're starting to reach the point where 512KB is becoming the optimal L2 size, so maybe that will happen on future CPUs, but what do I know.
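
If you want to actually see those latency tiers, a pointer-chasing loop makes them obvious, since each load depends on the one before it. A minimal C sketch (assumes a POSIX system for clock_gettime; the buffer sizes and iteration count are arbitrary picks, not anything official):

code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* Working sets sized to land in L1, L2, L3, and DRAM respectively. */
    size_t sizes[] = { 16 << 10, 128 << 10, 4 << 20, 64 << 20 };
    for (int s = 0; s < 4; s++) {
        size_t n = sizes[s] / sizeof(void *);
        void **buf = malloc(n * sizeof(void *));
        size_t *idx = malloc(n * sizeof(size_t));
        /* Random cyclic permutation so the hardware prefetcher can't help. */
        for (size_t i = 0; i < n; i++) idx[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        for (size_t i = 0; i < n; i++)
            buf[idx[i]] = &buf[idx[(i + 1) % n]];
        void **p = &buf[idx[0]];
        long iters = 50 * 1000 * 1000;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
            p = (void **)*p;            /* serialized, dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%6zu KB working set: %5.1f ns/load (%p)\n",
               sizes[s] >> 10, ns / iters, (void *)p);
        free(buf);
        free(idx);
    }
    return 0;
}

The ns/load figure steps up each time the working set spills out of a cache level, which is exactly the size-versus-latency tradeoff being argued about here.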

ShaneB
Oct 22, 2002


Ignoarints posted:

I'd hope for more, like 4.6-4.7. Seems reasonable based on what other people have gotten. Some have gotten 4.8+, but with a better cooler than I have.

So yes

I have a delidded, CLP'ed, watercooled 4670K that won't stabilize past 4.5GHz. I haven't jacked the volts up crazy high, but temps on non-synthetic loads like x264 reach the mid 60s and synthetics go high 70s, so that's about where I'm stopping. It's really luck of the draw more than temps holding it back.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Alereon posted:

And that's L2 cache too, not L3 like on modern CPUs. I think this is the article HalloKitty was mentioning: up through the Core 2 days, Intel used big L2 caches shared between the cores, which optimized how efficiently data was packed into the L2. The downside is that larger caches have higher latency, which kind of defeats the purpose of having a cache. Intel decided the best approach was to give each core its own smaller, private L2 cache and add a larger shared L3 cache to handle applications where 256KB just isn't enough. It does seem like we're starting to reach the point where 512KB is becoming the optimal L2 size, so maybe that will happen on future CPUs, but what do I know.

That particular AnandTech article hinted at the full reasons why Intel made that choice, but wasn't clear about it.

It's not just size that hurts latency; it's also associativity (the number of "ways") and access ports. The benefit of increased associativity is a better hit rate, and access ports have to scale with the number of CPU cores accessing a cache.

Intel designed Core 2 for only two CPU cores per chip. (Core 2 Quad processors are actually two dual-core chips mounted in a multichip package.) Because they only needed to share the last layer cache (or LLC) between two cores, they were able to be very aggressive on its size, associativity, and place in the hierarchy.

Nehalem jumped up to 4 cores per chip, doubling the number of access ports required. They also needed to bump the LLC size to 8MB in order to keep it at Intel's preferred minimum of 2MB LLC capacity per core. Doing both these things together would have made a Core 2 style L2 LLC too slow. You can kind of get a feel for it in the numbers:

Core 2 Penryn (45nm):
Per core (2 copies): 32KB 4-way instruction + 32KB 8-way data L1, 3 cycle latency
Shared: 6MB 24-way 2-port L2, 15 cycle latency

Nehalem (45nm):
Per core (4 copies): 32KB 4-way instruction + 32KB 8-way data L1, 4 cycle latency
Per core (4 copies): 256KB 8-way L2, 11 cycle latency
Shared: 8MB 16-way 4-port L3, 35+ cycle latency (sources seem to vary on this number)

That said, a hypothetical Nehalem design with an L2 LLC probably could have done better than 35 cycle latency. In the real Nehalem, thanks to the fast private L2 caches, L3 performance wasn't as critical and Intel was able to optimize it to reduce power use. In a Core 2 style design, the L2 LLC has to be very fast since it's the only thing between L1 and DRAM.

Note also that the Nehalem L1 latency grew by 1 cycle. They probably needed that to target higher clock speeds and perhaps power reduction (a likely need when doubling the cores per chip). This probably put more pressure on reducing L2 latency, which would have pushed them towards the 3-level design.

CPU design involves an insane number of engineering tradeoffs, all entangled.
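
To make the geometry concrete, here's a worked example in C of how those cache figures decompose into sets, assuming 64-byte lines on both parts (my assumption, though it matches what Intel used in that era):

code:
#include <stdio.h>

/* sets = size / (ways * line_size); the index bits pick a set,
 * then 'ways' tags are compared in parallel to find a hit. */
static void geometry(const char *name, long size_bytes, int ways) {
    const int line = 64;  /* assumed line size in bytes */
    long sets = size_bytes / ((long)ways * line);
    printf("%-38s %5ld sets x %2d ways\n", name, sets, ways);
}

int main(void) {
    geometry("Penryn shared L2 (6MB, 24-way):", 6L << 20, 24);
    geometry("Nehalem per-core L2 (256KB, 8-way):", 256L << 10, 8);
    geometry("Nehalem shared L3 (8MB, 16-way):", 8L << 20, 16);
    return 0;
}

Penryn's L2 works out to 4096 sets with 24 tag comparisons per lookup, versus 512 sets and 8 comparisons for Nehalem's private L2. More ways and more ports both widen the matching logic in the critical path, which is a big part of why a huge, highly associative, shared LLC can't also be low latency.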

BobHoward fucked around with this message at 10:39 on Feb 22, 2014

Straker
Nov 10, 2005

Fats posted:

I've had the same i7-920 for 5(!) years now, and I've yet to feel like I needed more. Considering it's been running at 4GHz since I got it, I imagine it'll die a terrible electrical death before it's really obsolete.
Those are good enough CPUs too. I wish there were something worth upgrading to; I'm getting, like... bored of my Z68 Pro and 2500K. Almost three years now and there's nothing meaningful to upgrade to. I want more Intel SATA ports and more cores for less than four figures :(

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Yeah, it sucks to know that even though there are tons of desktop enthusiasts pining to upgrade their Sandy Bridge, we're just too small and too niche of a market for anyone to give a gently caress anymore. Haswell-E will provide what you want (more cores for under four figures), but I'm assuming it will basically just be a 6-core version of the 4770K for $600 and an 8-core version for $1000.

PUBLIC TOILET
Jun 13, 2009

So it sounds like the next series of Xeons will mean the end of a budget-priced Xeon processor?

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

PUBLIC TOILET posted:

So it sounds like the next series of Xeons will mean the end of a budget-priced Xeon processor?

Not sure where you're getting that from, unless you're mistaking recent discussion of new E7 series Xeons as representing all Xeons. Roughly speaking, E3 = budget 1-socket, E5 = midrange 1-2 socket, E7 = high end 2-8 socket.

Navaash
Aug 15, 2001

FEED ME


Straker posted:

Those are good enough CPUs too. I wish there were something worth upgrading to; I'm getting, like... bored of my Z68 Pro and 2500K. Almost three years now and there's nothing meaningful to upgrade to. I want more Intel SATA ports and more cores for less than four figures :(
I would just be happy with more SATA 6Gbps ports if it weren't for the fact that add-in SATA cards are either absolute garbage, overpriced to hell, or both. (2500K/P67 here)

edit: I should have mentioned I'm in Japan so my options are limited to what I can pick up in Nipponbashi

Navaash fucked around with this message at 03:01 on Feb 23, 2014

SamDabbers
May 26, 2003



Navaash posted:

I would just be happy with more SATA 6Gbps ports if it weren't for the fact that add-in SATA cards are either absolute garbage, overpriced to hell, or both. (2500K/P67 here)

Pick up an IBM M1015/M1115, which is an 8-port non-poo poo PCIe x8 SATA/SAS 6Gbps adapter and a favorite in the NAS thread for ZFS setups. They go for under $100 on eBay; I picked one up for $70 shipped. It even supports RAID 0/1/10 out of the box, and you can flash it to also do RAID 5/50. You'll need a couple of cheap adapter cables to use it with regular SATA drives. It might be overkill for your desktop, though.

GokieKS
Dec 15, 2012

Mostly Harmless.

Navaash posted:

I would just be happy with more SATA 6Gbps ports if it weren't for the fact that add-in SATA cards are either absolute garbage, overpriced to hell, or both. (2500K/P67 here)

I don't know what your cutoff for "overpriced to hell" is, but you can get an IBM/Dell/HP/Lenovo/etc. rebranded LSI controller for ~$80 if you keep an eye on eBay (I got an IBM M1115 for $65 a little while back), and those can be used as either a RAID controller or reflashed to be an HBA. You'd need to get two SAS-to-SATA forward breakout cables, but even then it's still only about $100 total, which I think is pretty reasonable for another 8 SATA ports if you need them.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Unfortunately, both of those controllers are meant for spinning HDDs only and provide the same awful SSD support as the dirt-cheap consumer controllers. You're kind of stuck with the onboard Intel controller if you want decent performance with SSDs, and you only really need SATA 600 for SSDs anyway.

SamDabbers
May 26, 2003



Alereon posted:

Unfortunately, both of those controllers are meant for spinning HDDs only and provide the same awful SSD support as the dirt-cheap consumer controllers. You're kind of stuck with the onboard Intel controller if you want decent performance with SSDs, and you only really need SATA 600 for SSDs anyway.

Interesting. Apparently it only supports TRIM with SSDs that are on the compatibility list, the majority of which are expensive enterprise drives, but it does have the Intel 320/510/520 and Samsung 840 Pro on the list, so if you have one of those it should be fine.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

SamDabbers posted:

Interesting. Apparently it only supports TRIM with SSDs that are on the compatibility list, the majority of which are expensive enterprise drives, but it does have the Intel 320/510/520 and Samsung 840 Pro on the list, so if you have one of those it should be fine.
Where are you seeing that TRIM is supported with any drives?

SamDabbers
May 26, 2003



Alereon posted:

Where are you seeing that TRIM is supported with any drives?

Here. The controller has to be flashed with "IT" firmware, which disables all RAID functions. Also, an anecdote.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

SamDabbers posted:

Here. The controller has to be flashed with "IT" firmware, which disables all RAID functions. Also, an anecdote.
That would explain all the people who couldn't get it to work out of the box even with RAID disabled :downs: Thanks for digging that up! That's a pretty crappy list of supported drives though :(

Alereon fucked around with this message at 04:27 on Feb 23, 2014

Ignoarints
Nov 26, 2010

ShaneB posted:

I have a delidded, CLP'ed, watercooled 4670K that won't stabilize past 4.5GHz. I haven't jacked the volts up crazy high, but temps on non-synthetic loads like x264 reach the mid 60s and synthetics go high 70s, so that's about where I'm stopping. It's really luck of the draw more than temps holding it back.

What's your voltage at? I was stable for 30 minutes or so at 4.5, but the temperatures kept spiking above 80°C and I stopped it. That's at 1.29 volts. From the guides I've read, I'm probably going to see what I can get at 1.35 volts before calling it quits. I've also read about people having some luck stabilizing by lowering the uncore (assuming it was overclocked along with the multiplier) and increasing the vring voltage. My uncore is at 4.2GHz as it is, and from what I understand that's already a bit high. I have some wiggle room, but hopefully the processor can do it at all.

Either way, I'll find out, picked up the CPU Mutilator today for $13



I'm wondering now if I should replace the paste between the water cooler and the chip with the CLU stuff as well. From the delidding guides I've read through, people use good paste there and the CLU only under the heat spreader, and I wonder why that is. Is it just obnoxious to work with or something?

Ignoarints fucked around with this message at 22:03 on Feb 24, 2014

JawnV6
Jul 4, 2004

So hot ...

BobHoward posted:

CPU design involves an insane number of engineering tradeoffs, all entangled.

This is an amazing post.

ShaneB
Oct 22, 2002


Ignoarints posted:

What's your voltage at? I was stable for 30 minutes or so at 4.5, but the temperatures kept spiking above 80°C and I stopped it. That's at 1.29 volts. From the guides I've read, I'm probably going to see what I can get at 1.35 volts before calling it quits. I've also read about people having some luck stabilizing by lowering the uncore (assuming it was overclocked along with the multiplier) and increasing the vring voltage. My uncore is at 4.2GHz as it is, and from what I understand that's already a bit high. I have some wiggle room, but hopefully the processor can do it at all.

Either way, I'll find out, picked up the CPU Mutilator today for $13



I'm wondering now if I should replace the paste between the water cooler and the chip with the CLU stuff as well. From the delidding guides I've read through, people use good paste there and the CLU only under the heat spreader, and I wonder why that is. Is it just obnoxious to work with or something?

The temp deltas between CLP/U and something normal like AS5 are in the 3-5°C range, but I chose ease of use/reassembly and just used AS5 between my IHS and H60 watercooler, with CLP under the IHS. CLP/U require a really annoying removal process, which isn't a HUGE deal unless you're someone who's constantly removing/reinstalling coolers... but honestly, even if you only ever needed to do it ONCE, it wouldn't be worth it IMO. Your heat isn't keeping you from 4.5+, it's almost certainly just your chip. Are you liquid cooling and seeing temps spike above 80? On what, P95 small FFTs or IBT?

My voltages are 2.0V input, 1.29V core, and 1.25V cache. I have my uncore at 38x. Your uncore CAN restrict overclocking, and dropping it a little to get more out of the core is worth it.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Intel has released a driver update that enables Quick Sync transcoding on Pentium and Celeron processors with Intel HD Graphics. Previously the cut-off was the Core i3, so this makes the lower-end processors MUCH more attractive versus AMD's offerings.

Ignoarints
Nov 26, 2010
Apparently they used JB Weld on my chip, because the heat spreader isn't budging. I basically destroyed the wood and have resorted to using my wooden knife block.

Dunno what to do now.

Edit: Well, I used a razor blade after all. I feel like I did a piss-poor job of it, but it turns on.





And some quick results :)

Before: [benchmark screenshot]

After: [benchmark screenshot]

Edit again:

Some rough overclocking results. I know the vcore is up there, but I haven't tweaked anything yet, and I've only tested this for 30 minutes or so. But given the temperatures as they are now, I might get a stable 4.7 out of this.



Totally worth the $30 and stress

I've been doing research into a "reasonable" high vcore for these processors and getting mixed results. A lot of people say 1.45 is simply too high, but don't offer much more than that. It's certainly believable, of course. Then they're often countered by people who say they've been folding at 1.52 volts for 9 months straight (for example), but I can't take that as proof that it's okay either. The only directly reported failures at this vcore I've come across are always associated with much higher temperatures (some at +100°C, lol), so far.

Anybody have opinions on vcore this high? I'm still going to try and tweak it lower, but I'm not expecting much.

Ignoarints fucked around with this message at 06:56 on Feb 26, 2014

Hace
Feb 13, 2012

<<Mobius 1, Engage.>>
As far as I know, you want to stay under 1.3V if you want to run a 24/7 stable overclock.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Check out the overclocking megathread over here.

Ignoarints
Nov 26, 2010

Alereon posted:

Check out the overclocking megathread over here.

Thanks, sorry, wasn't even aware of it.

ShaneB posted:

The temp deltas between CLP/U and something normal like AS5 are in the 3-5°C range, but I chose ease of use/reassembly and just used AS5 between my IHS and H60 watercooler, with CLP under the IHS. CLP/U require a really annoying removal process, which isn't a HUGE deal unless you're someone who's constantly removing/reinstalling coolers... but honestly, even if you only ever needed to do it ONCE, it wouldn't be worth it IMO. Your heat isn't keeping you from 4.5+, it's almost certainly just your chip. Are you liquid cooling and seeing temps spike above 80? On what, P95 small FFTs or IBT?

My voltages are 2.0V input, 1.29V core, and 1.25V cache. I have my uncore at 38x. Your uncore CAN restrict overclocking, and dropping it a little to get more out of the core is worth it.

Just noticed how much higher your cache voltage is than mine. Was that for stability reasons? Maybe I can increase my cache voltage to lower my vcore. Also, the 80+ spikes were on small FFTs, but blend wasn't much better. All in the past now :D. I ended up using TX4 (?) for the water block after seeing how annoying the liquid metal was to work with on the CPU die.

Ignoarints fucked around with this message at 15:46 on Feb 26, 2014

Ignoarints
Nov 26, 2010
Might be the wrong thread for this, but I have a question about the Haswell CPU die (and it probably applies to all the others).

After delidding, I was pretty much mesmerized by how small it was and how much is packed into something so tiny (1.4 billion things that do something!). I've seen old CPU dies and wafers from probably 30 years ago, and the patterns were relatively huge and obvious. I actually have a box of them at home.



That was my processor delidded, and it was so reflective it might as well have been a hole; if it weren't obviously showing a reflection, it would be a completely featureless surface. Is this because the actual die pattern is on the underside of the rectangle here, or is there a layer of something on top of it, or is it that the pattern is so small now that it's just not visible to the naked eye anymore?

I'm talking about something like this

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
It is mounted upside down. The CPU was built up on top of a silicon wafer, then mounted with the "bottom" flat silicon facing up.

Ignoarints
Nov 26, 2010

Factory Factory posted:

It is mounted upside down. The CPU was built up on top of a silicon wafer, then mounted with the "bottom" flat silicon facing up.

Nice, thanks.

JawnV6
Jul 4, 2004

So hot ...
I'm unfamiliar with the older manufacturing techniques you're talking about, but right there you're looking at silicon. Under a few microns of that is the poly layer with transistors, then the various metal layers, then the pads and the PCB. The term is flip chip.

Die images like that aren't just snapped, they're heavily staged and doctored. Often requiring special equipment.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

JawnV6 posted:

I'm unfamiliar with the older manufacturing techniques you're talking about, but right there you're looking at silicon. Under a few microns of that is the poly layer with transistors, then the various metal layers, then the pads and the PCB. The term is flip chip.

Die images like that aren't just snapped, they're heavily staged and doctored. Often requiring special equipment.

I have some 1 micron Motorola parts, and you can clearly see the patterns on the die. I guess they weren't flipping chips at that point? I bet I could get a good picture of an 80s die with nothing but a macro lens.

Gwaihir
Dec 8, 2009
Hair Elf

JawnV6 posted:

I'm unfamiliar with the older manufacturing techniques you're talking about, but right there you're looking at silicon. Under a few microns of that is the poly layer with transistors, then the various metal layers, then the pads and the PCB. The term is flip chip.

Die images like that aren't just snapped, they're heavily staged and doctored. Often requiring special equipment.


Twerk from Home posted:

I have some 1 micron Motorola parts, and you can clearly see the patterns on the die. I guess they weren't flipping chips at that point? I bet I could get a good picture of an 80s die with nothing but a macro lens.

Funny thing, I used to work for the guy that took a ton of original die shots on older chips. Our ancient rear end website has a pretty good gallery of some of the funny things that used to get slipped into dies back then: http://micro.magnet.fsu.edu/creatures/index.html

A brief bit about how it was done: http://micro.magnet.fsu.edu/creatures/technical/packaging.html Once CPUs and the like moved to flip-chip designs, we couldn't easily photograph them anymore, at least not at the individual CPU level. We had a decent collection of whole wafers to shoot as well, but I don't think we ever got anything newer than a P3.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Gwaihir posted:

Funny thing, I used to work for the guy that took a ton of original die shots on older chips. Our ancient rear end website has a pretty good gallery of some of the funny things that used to get slipped into dies back then: http://micro.magnet.fsu.edu/creatures/index.html

That is a classic website! I first found it back in the 1990s. Thanks to both of you for it.

Relevant to the question of when flip chip began, I found this die shot of one of the earliest flip-chip parts I remember seeing in the wild, the 0.6 micron PowerPC 601. Do you have any recollection of whether you took that photo yourselves? Because if you did, somebody at IBM or Motorola must've donated you a die that had been balled but not yet soldered to a package.

Popete
Oct 6, 2009

This will make sure you don't suggest to the KDz
That he should grow greens instead of crushing on MCs

Grimey Drawer
Maybe not the best thread for this, but I'll shoot.

So I've been given the task of porting our Core i7 VxWorks codebase (based around Haswell) to the new Bay Trail Atom line. I've not worked with Atom processors yet, but I have one sitting at my desk and I'm at the very initial stages of trying to boot VxWorks on it. Are there any big differences from a software standpoint that I should take into consideration when taking a current Core i7 Haswell implementation and trying to run it on Bay Trail? Currently I have Coreboot running a SeaBIOS payload, which in turn has the VxWorks bootrom as a payload. I think it's handing off to the VxWorks payload, but it hangs at that point, so obviously something isn't working.

I'm just trying to get a sense of the big differences I should be taking into account.

Also not having serial driver output that early in the bootrom suuuuuccckkkssss.
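
For the early-serial pain, the usual trick is banging the legacy COM1 16550 directly with polled I/O. A minimal C sketch, with the big caveat that this assumes your Bay Trail board actually routes a legacy UART to port 0x3F8 (many Atom boards put the console UART elsewhere, so the base address, and even the presence of legacy port I/O, are board-specific assumptions to check):

code:
#include <stdint.h>

#define COM1 0x3F8

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static void early_uart_init(void) {
    outb(COM1 + 1, 0x00);  /* no interrupts; we poll */
    outb(COM1 + 3, 0x80);  /* DLAB on to set the baud divisor */
    outb(COM1 + 0, 0x01);  /* divisor 1 = 115200 baud */
    outb(COM1 + 1, 0x00);
    outb(COM1 + 3, 0x03);  /* 8N1, DLAB off */
}

static void early_putc(char c) {
    while (!(inb(COM1 + 5) & 0x20))  /* LSR bit 5: TX holding reg empty */
        ;
    outb(COM1, (uint8_t)c);
}

static void early_puts(const char *s) {
    for (; *s; s++) {
        if (*s == '\n') early_putc('\r');
        early_putc(*s);
    }
}

Sprinkle early_puts("got here\n") through the handoff path and you at least learn where it dies. If SeaBIOS has already initialized the port, you can skip early_uart_init().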

Gwaihir
Dec 8, 2009
Hair Elf

BobHoward posted:

That is a classic website! I first found it back in the 1990s. Thanks to both of you for it.

Relevant to the question of when flip chip began, I found this die shot of one of the earliest flip-chip parts I remember seeing in the wild, the 0.6 micron PowerPC 601. Do you have any recollection of whether you took that photo yourselves? Because if you did, somebody at IBM or Motorola must've donated you a die that had been balled but not yet soldered to a package.

Yup, everything on there was taken by us. I'm preeeety sure the PC running the microscope that got used for those chip shots is still on Windows 98SE, maaaybe Win2000 at the latest. Old proprietary stuff is the best!

Ignoarints
Nov 26, 2010
Does anybody know of a Windows program I can use to view CPU input voltage? I'm not sure my BIOS is applying the changes I'm making. I'm getting strangeness during overclocking, and I just noticed that the BIOS saves my settings (say I put in 2.0V) but on the left it still shows what it used to be (1.8V). That's not unusual for any other setting until I restart, at which point the left figure updates, but it never does so for the CPU input voltage (VRIN).

Also, preferably one that shows the cache multiplier as well, since Intel Extreme Tuning reports that, but it says it's at 40x when it's manually set to 34x in the BIOS and turbo is disabled... if it really were at 40x, that could account for some more weirdness.
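
For what it's worth, the Windows monitoring tools all read this stuff from model-specific registers under the hood. Here's a sketch of the underlying idea using Linux's /dev/cpu/N/msr interface, since Windows offers no MSR access without a kernel driver. The register and field layout (IA32_PERF_STATUS at 0x198, with Haswell reporting core voltage in bits 47:32 as multiples of 1/8192 V) is my reading of Intel's documentation, so verify it against the SDM for your exact part:

code:
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* Requires root and the msr kernel module (modprobe msr). */
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    uint64_t val;
    /* The msr driver uses the file offset as the MSR index. */
    if (pread(fd, &val, sizeof val, 0x198) != sizeof val) {
        perror("pread MSR 0x198");
        return 1;
    }
    close(fd);

    /* Haswell: bits 47:32 = core voltage in 1/8192 V units. */
    printf("Vcore: %.4f V\n", ((val >> 32) & 0xFFFF) / 8192.0);
    return 0;
}

It won't show you VRIN (that lives on the motherboard's VRM side), but it's a sanity check on what the CPU itself thinks its core voltage is, independent of whatever the BIOS screen claims.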

Ignoarints fucked around with this message at 03:17 on Mar 2, 2014


Woodsy Owl
Oct 27, 2004

Alereon posted:

Intel has released a driver update that enables Quick Sync transcoding on Pentium and Celeron processors with Intel HD Graphics. Previously the cut-off was the Core i3, so this makes the lower-end processors MUCH more attractive versus AMD's offerings.

This applies to my G2020! Thanks so much for the heads up, man!

edit: I can't get it installed; the installation keeps failing for God knows what reason.

Woodsy Owl fucked around with this message at 15:14 on Mar 2, 2014
