Paul MaudDib
May 2, 2006

"Tell me of your home world, Usul"


Anime Schoolgirl posted:

POWER8 has up to 8 threads per core and POWER9 will have up to 12

SPARC T3/T4 have 8 threads per core as well.

The nice thing about this is that Oracle charges for their DB by the core, they don't care how many threads it's got, so that gives you a huge price break on Oracle DB compared to say Xeon processors.
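To put rough numbers on that licensing math (illustrative figures only -- the ~$25k/core ballpark gets quoted further down the thread, and Oracle's real price list varies):

```python
PRICE_PER_CORE = 25_000  # illustrative; actual Oracle list pricing varies

def license_cost_per_thread(cores: int, threads_per_core: int) -> float:
    """Oracle bills per core, so packing more hardware threads into each
    core drives the effective license cost per thread down."""
    return (cores * PRICE_PER_CORE) / (cores * threads_per_core)

# A 16-core SPARC T4 (8 threads/core) vs. 16 Xeon cores (2 threads/core):
sparc_per_thread = license_cost_per_thread(16, 8)  # 3125.0
xeon_per_thread = license_cost_per_thread(16, 2)   # 12500.0 -- 4x more
```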


Boiled Water
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER


Imagine the power savings if you had just one core running 9 threads. I mean, I imagine this would be the case, but I have no grounds for it except fewer cores = less power consumption.

Gwaihir
Dec 8, 2009



Hair Elf

Boiled Water posted:

Imagine the power savings if you had just one core running 9 threads. I mean, I imagine this would be the case, but I have no grounds for it except fewer cores = less power consumption.

That is uhhhh not at all the way it works rofl.

Anime Schoolgirl
Nov 28, 2002






The reason SMT works is that a single thread often doesn't saturate a core's execution resources by itself, and in some architectures like POWER, it never will, hence the crazy amount of threads.

x86 as it is is very efficient in a straight line, so a third thread wouldn't see much work put into it if it existed.

Boiled Water
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER


Gwaihir posted:

That is uhhhh not at all the way it works rofl.

That's too bad.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Boiled Water posted:

Imagine the power savings if you had just one core running 9 threads. I mean, I imagine this would be the case, but I have no grounds for it except fewer cores = less power consumption.

If the CPU core has an overabundance of execution units, such that each of the 8 SMT threads can run at even just 30-40% of the speed of a single thread occupying the whole core, I'd say it can generate quite some heat and eat power.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.



Oven Wrangler

Boiled Water posted:

Imagine the power savings if you had just one core running 9 threads. I mean, I imagine this would be the case, but I have no grounds for it except fewer cores = less power consumption.

The thing is, a "core" can be arbitrarily large since you can have multiple execution units of the same type in a single core and even without SMT, unrelated instructions within the same thread can be executed simultaneously. (That's superscalar architecture, like the post below mentions, and has been with us since the Pentium 1.) SMT allows you to keep the execution units busy even if one thread has instructions very tightly chained together by dependencies or doesn't have many/any instructions for a certain type of execution unit - for example, if one thread is full of integer operations and another floating-point.

However, the configuration of currently available x86-64 cores is such that for most loads they can usually keep their execution units close to optimum possible saturation with just one thread. That is to say that adding a second thread doesn't offer much of a benefit for most loads, especially homogeneous ones as opposed to multitasking, so implicitly the gains of adding more past 2 are even more marginal and to allow something like 8-way SMT would likely not be worth the added complexity.

What I don't know is why SPARC and POWER systems' designers thought many-thread SMT was worth implementing. It could have been any/all of multiple factors:
1) These architectures use greater parallelism in execution units, effectively having "wider" cores that are harder to saturate.
2) These architectures have longer pipelines than typical x86-64 designs (that is, an execution unit takes more clocks to complete an operation but can accommodate a greater number of operations simultaneously in flight at different phases in the unit) which are again harder to saturate.
3) These architectures use more specialized instructions split into a greater variety of execution units, with a smaller portion of the whole likely to be in use by a specific thread at any given time.
4) These architectures are better at finding exploitable parallelism between threads than Intel's (I'm not educated enough on the details of Intel's implementation to know if it has significant shortcomings that could be surpassed)

There are probably others too, I just know about this stuff from undergrad/reading articles and don't actually work on CPU design. Anyway, the point is that if you had a big, complex core that could actually benefit from running 9 threads at once it might (would) use a lot more power than one of the ones we're used to.

Eletriarnation fucked around with this message at 15:35 on Dec 19, 2016
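A toy model of that execution-unit-mixing argument (nothing like real issue logic -- one in-order op per thread per cycle, one unit of each type, all names invented for illustration):

```python
def cycles_needed(streams, units=("int", "fp")):
    """Each cycle, every execution unit retires at most one matching op,
    taken from the front of any thread that hasn't already issued."""
    streams = [list(s) for s in streams]  # don't mutate the caller's lists
    cycles = 0
    while any(streams):
        cycles += 1
        busy = set()  # unit types already claimed this cycle
        for s in streams:
            if s and s[0] in units and s[0] not in busy:
                busy.add(s.pop(0))
    return cycles

int_work, fp_work = ["int"] * 8, ["fp"] * 8
solo = cycles_needed([int_work]) + cycles_needed([fp_work])  # 16: one unit idle every cycle
mixed_smt = cycles_needed([int_work, fp_work])               # 8: both units stay busy
same_smt = cycles_needed([["int"] * 8, ["int"] * 8])         # 16: SMT gains nothing
```

Two homogeneous integer threads finish no faster than running them back to back, which is the "not much benefit for most loads" point above.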

Alzion
Dec 31, 2006
Technically a '06

People completely misunderstanding SMT in this thread

We're not talking about a jump from scalar to superscalar architecture, guys. The SMT we're talking about just keeps microinstructions from another thread in queue so processing power doesn't sit idle at a point in the pipeline if there is a branch prediction error or cache miss. Adding SMT for more than 2 threads on a single core just isn't cost-efficient outside a few niche server/supercomputer scenarios. That's why with 2-thread SMT you're only looking at a 5-7% increase in performance in most general situations.

edit: beaten with more detail ^

Gwaihir
Dec 8, 2009



Hair Elf

Eletriarnation posted:

The thing is, a "core" can be arbitrarily large since you can have multiple execution units of the same type in a single core and even without SMT, unrelated instructions within the same thread can be executed simultaneously. SMT allows you to keep the execution units busy even if one thread has instructions very tightly chained together by dependencies or doesn't have many/any instructions for a certain type of execution unit - for example, if one thread is full of integer operations and another floating-point.

However, the configuration of currently available x86-64 cores is such that for most loads they can usually keep their execution units close to optimum possible saturation with just one thread. That is to say that adding a second thread doesn't offer much of a benefit for most loads, especially homogeneous ones as opposed to multitasking, so implicitly the gains of adding more past 2 are even more marginal and to allow something like 8-way SMT would likely not be worth the added complexity.

What I don't know is why SPARC and POWER systems' designers thought many-thread SMT was worth implementing. It could have been any/all of multiple factors:
1) These architectures use greater parallelism in execution units, effectively having "wider" cores that are harder to saturate.
2) These architectures have longer pipelines than typical x86-64 designs (that is, an execution unit takes more clocks to complete an operation but can accommodate a greater number of operations simultaneously in flight at different phases in the unit) which are again harder to saturate.
3) These architectures use more specialized instructions split into a greater variety of execution units, fewer of which are likely to be in use by a specific thread at any given time.
4) These architectures are better at finding exploitable parallelism between threads than Intel's (I'm not educated enough on the details of Intel's implementation to know if it has significant shortcomings that could be surpassed)

There are probably others too, I just know about this stuff from undergrad/reading articles and don't actually work on CPU design. Anyway, the point is that if you had a big, complex core that could actually benefit from running 9 threads at once it might use a lot more power than one of the ones we're used to.

In IBM's case, at least, it's because POWER chips were spawned for the mainframe and midrange business, running machines supporting thousands of users running interactive sessions at once off comparatively few cores.

crazypenguin
Mar 9, 2005
nothing witty here, move along

Anime Schoolgirl posted:

x86 as it is is very efficient in a straight line, so a third thread wouldn't see much work put into it if it existed.

Well, not really to do with x86. In fact, those Phi cards Intel makes have 4 threads per core.

Basically, it's just that it trades off against single thread performance. Larger hardware thread counts are good for systems that care mostly about throughput. You can extract every last bit of bang for buck out of the core, at the cost of all the threads running more slowly overall.
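That tradeoff can be sketched with made-up numbers (a core that can issue 4 ops/cycle, but whose threads each manage at most 2):

```python
def throughput(threads, issue_width=4.0, per_thread_ipc=2.0):
    """Total throughput saturates at the core's issue width; past that
    point, extra hardware threads only dilute per-thread speed."""
    total = min(issue_width, threads * per_thread_ipc)
    return total, total / threads

throughput(1)  # (2.0, 2.0) -- one thread can't fill the core
throughput(2)  # (4.0, 2.0) -- saturated, no per-thread cost yet
throughput(8)  # (4.0, 0.5) -- same total, but each thread runs 4x slower
```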

karoshi
Nov 4, 2008

"Can somebody mspaint eyes on the steaming packages? TIA" yeah well fuck you too buddy, this is the best you're gonna get. Is this even "work-safe"? Let's find out!


crazypenguin posted:

Well, not really to do with x86. In fact, those Phi cards Intel makes have 4 threads.

Basically, it's just that it trades off against single thread performance. Larger hardware thread counts are good for systems that care mostly about throughput. You can extract every last bit of bang for buck out of the core, at the cost of all the threads running more slowly overall.

Like, f.ex., in-order GPUs running 4000 threads on a single die. There was a marketing presentation by NV* comparing the FLOPs/rest-of-it power ratio for a GPU and an OOOE x86 core, IIRC it was around 2 orders of magnitude. The OOOE machinery is complicated, remove it and you go from 2 IPC to 10 CPI, add 20 threads and you're back at 2 IPC with less power overall (numbers pulled out of my rear end).

*: not a fair comparison, the target workloads are quite different.

silence_kit
Jul 14, 2011


I have only a rudimentary knowledge in computer architecture. What is the important distinction between a core and an execution unit? Also, what exactly is Intel's Hyper-Threading technology? In my mind, I thought it was some kind of abstraction which allowed for a hardware notion of a thread which allowed for faster context switching, but I might be wrong there.


There's already an abbreviation for this: e.g.

ehnus
Apr 15, 2003

Now you're thinking with portals!

An execution unit is a block of silicon that does the processing. Typically these include arithmetic units, floating point units, and load/store units. A "core" is a combination of some number of execution units, register files, and cache. Register files are where the intermediate computation forms are stored, sort of like memory but filled with the data used by the execution units.

Hyperthreaded CPUs have separate register files per-thread but share execution units. Benefits to hyper threading come in situations where the execution units would normally lie in wait. For example, if the code is waiting on data to make it into the registers from the cache, or into the cache from other tiers of cache (or main memory), they just sit there twiddling their thumbs until they can work again. If you can have another computation queued up in another thread you can keep the execution unit packed.

Systems with many hyperthreads (POWER8/9, GPUs/Xeon Phi, etc.) are useful in cases where computation is frequently blocked on waiting for data. For a database server that spends a lot of time waiting for pages to be brought off of media and into memory, or from memory into cache, you can get great efficiency increases when you have 8 threads sharing a common set of execution units.
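The latency-hiding arithmetic behind that is simple (numbers invented for illustration; real stall patterns are messier):

```python
import math

def threads_to_stay_busy(compute_cycles, stall_cycles):
    """If each thread computes for C cycles and then waits M cycles on
    memory, roughly 1 + M/C threads keep the execution units saturated."""
    return math.ceil(1 + stall_cycles / compute_cycles)

threads_to_stay_busy(25, 25)  # 2 -- roughly the 2-way SMT case
threads_to_stay_busy(10, 70)  # 8 -- a cache-miss-heavy database load
```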

silence_kit
Jul 14, 2011


ehnus posted:

An execution unit is a block of silicon that does the processing. Typically these include arithmetic units, floating point units, and load/store units. A "core" is a combination of some number of execution units, register files, and cache. Register files are where the intermediate computation forms are stored, sort of like memory but filled with the data used by the execution units.

Thanks, that helps a lot.

ehnus posted:

Hyperthreaded CPUs have separate register files per-thread but share execution units. Benefits to hyper threading come in situations where the execution units would normally lie in wait. For example, if the code is waiting on data to make it into the registers from the cache, or into the cache from other tiers of cache (or main memory), they just sit there twiddling their thumbs until they can work again. If you can have another computation queued up in another thread you can keep the execution unit packed.

OK, so my earlier conception of hyper-threading was very wrong. Hyper-threading presents multiple independent instruction streams (might be using that term incorrectly) to the programmer, but these instruction streams under the hood share computation circuits or 'execution units' and under certain workloads, will have to wait for their turn to use the integer addition circuit, for example.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

To the outside, one physical core is represented as two (or more) logical cores, to which the operating system schedules regular threads. Ideally, the OS has some awareness and keeps the processing load on the first logical core of each physical core before spreading the load to the secondary ones. That's one reason why, for instance, in Windows' task manager you'll see the odd cores under load and the even ones not, when using an Intel CPU with Hyperthreading.

--edit:
Speaking of which, I'm curious whether Intel offers some control to the OS to tell the CPU which logical core to prioritize, so that the scheduler can act accordingly. Say, schedule higher-priority threads on logical core 0 and background threads on logical core 1, and tell the CPU to prioritize core 0 at all costs, or to even everything out if two important threads are running on a physical core.

Combat Pretzel fucked around with this message at 21:04 on Dec 19, 2016
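For what it's worth, the OS-side half of this is already steerable from userspace via CPU affinity. A sketch (the consecutive-sibling layout is an assumption -- real topology should be read from /sys/devices/system/cpu/cpu*/topology/thread_siblings_list, and `os.sched_setaffinity` is Linux-only):

```python
def primary_logical_cpus(physical_cores, smt_ways=2):
    """One logical CPU per physical core, assuming siblings are enumerated
    in consecutive groups (logical CPUs 0-1 on core 0, 2-3 on core 1, ...)."""
    return {core * smt_ways for core in range(physical_cores)}

# Pin the current process to one hardware thread per core (Linux only):
# os.sched_setaffinity(0, primary_logical_cpus(4))  # CPUs {0, 2, 4, 6}
```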

EdEddnEddy
Apr 5, 2012





Combat Pretzel posted:

To the outside, one physical core is represented as two (or more) logical cores, to which the operating system schedules regular threads. Ideally, the OS has some awareness and keeps the processing load on the first logical core of each physical core before spreading the load to the secondary ones. That's one reason why, for instance, in Windows' task manager you'll see the odd cores under load and the even ones not, when using an Intel CPU with Hyperthreading.

--edit:
Speaking of which, I'm curious whether Intel offers some control to the OS to tell the CPU which logical core to prioritize, so that the scheduler can act accordingly. Say, schedule higher-priority threads on logical core 0 and background threads on logical core 1, and tell the CPU to prioritize core 0 at all costs, or to even everything out if two important threads are running on a physical core.

Since Windows 8 I believe it has. It was really noticeable coming from 7 on my i7 3930K and I remember AMD's Bulldozer got a ~10% boost from this as the OS knew which cores were part of the same "Module" and split things off accordingly.

ehnus
Apr 15, 2003

Now you're thinking with portals!

silence_kit posted:

OK, so my earlier conception of hyper-threading was very wrong. Hyper-threading presents multiple independent instruction streams (might be using that term incorrectly) to the programmer, but these instruction streams under the hood share computation circuits or 'execution units' and under certain workloads, will have to wait for their turn to use the integer addition circuit, for example.

Yeah, pretty much. The "independent instruction streams" usually manifest as operating system threads.

The speedup can be so variable because if you have, say, two integer-heavy workloads sharing a core, they'll just be waiting for the execution units to free up and you won't see any (significant) gains.

If you can pair, say, a floating-point-heavy workload on one thread with an integer-heavy workload on the other, you'll see much higher throughput. Likely not 2x though, as they'll be bottlenecked on bits you just can't get away from, like memory load/store.

Paul MaudDib
May 2, 2006

"Tell me of your home world, Usul"


Eletriarnation posted:

What I don't know is why SPARC and POWER systems' designers thought many-thread SMT was worth implementing. It could have been any/all of multiple factors:
1) These architectures use greater parallelism in execution units, effectively having "wider" cores that are harder to saturate.
2) These architectures have longer pipelines than typical x86-64 designs (that is, an execution unit takes more clocks to complete an operation but can accommodate a greater number of operations simultaneously in flight at different phases in the unit) which are again harder to saturate.
3) These architectures use more specialized instructions split into a greater variety of execution units, with a smaller portion of the whole likely to be in use by a specific thread at any given time.
4) These architectures are better at finding exploitable parallelism between threads than Intel's (I'm not educated enough on the details of Intel's implementation to know if it has significant shortcomings that could be surpassed)

All of the above.

It used to be called a "barrel processor". Basically the architectural idea is a big core with lots of execution units, combined with a longer pipeline. To keep this architecture saturated, you then have a whole bunch of threads running so you can "hide" latency/cache misses/pipeline stalls by just switching to another thread that's ready to execute.

They then combine that with hand-tuned software that exposes lots of parallelism to the processor. Being real honest here, you don't buy an UltraSPARC or SPARC T3/T4 unless you are already deep into the Oracle ecosystem, and they actually are pretty great hardware for running Oracle DB (if not the best around - hand optimization still works). Again, particularly since Oracle's pricing model is based on cores, not threads - so you'd pay four times as much to license an equivalent number of threads on Hyper-Threaded Xeons. I think it runs about $25k per core.

There are some pretty big downsides though. Sun/Oracle call it "throughput-oriented", which is putting a positive spin on an architecture that's poorly suited to latency-sensitive tasks and has what I'll be nice and refer to as "enterprise-grade pricing". It's good for power consumption though, and in combination with the throughput it's not bad for HPC-type applications that are sensitive to power costs and don't really care about latency. But given the price, they have not done that well against commodity hardware. IBM has done pretty well though: quite a few machines on the Green 500 supercomputer list are BlueGene/Q, which is POWER-based. At one point they made up quite a lot of the top of the list, but they've been pushed off recently.

Paul MaudDib fucked around with this message at 23:23 on Dec 19, 2016

Paul MaudDib
May 2, 2006

"Tell me of your home world, Usul"


ehnus posted:

Yeah, pretty much. The "independent instruction streams" usually manifest as operating system threads.

The speedup can be so variable because if you have, say, two integer-heavy workloads sharing a core, they'll just be waiting for the execution units to free up and you won't see any (significant) gains.

If you can pair, say, a floating-point-heavy workload on one thread with an integer-heavy workload on the other, you'll see much higher throughput. Likely not 2x though, as they'll be bottlenecked on bits you just can't get away from, like memory load/store.

It really depends though, because there's not necessarily just one integer unit in the processor. Side note: in fact, even Intel's x86 processors have at least two integer execution units (Skylake has three) per core, and two or three floating-point units per core.

The problem is that you can't necessarily keep all of those execution units busy at once; there isn't necessarily enough instruction-level parallelism in a single thread to cover that. Sooner or later you will get a pipeline stall from a conditional branch or something. The point of hyperthreading (or barrel processing) is that you then have some other thread you can switch to in order to keep those units busy.

So it's similar-ish to hyperthreading, but unlike hyperthreading there are never two threads dispatching at once. The processor fully switches context to a new thread. What SPARC is doing is basically presenting 32/64/etc "virtual cores" and then scheduling them onto physical cores when they're in a ready state.

http://www.solarisinternals.com/wik...CMT_Utilization

This may have changed somewhat on the more recent T3/T4 - I think those were moderately different, but I'm not sure of the exact difference.
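A minimal sketch of that round-robin dispatch (an invented scheduler, just to show "one thread issues per cycle, stalled threads are skipped"):

```python
def barrel_dispatch(ready, cycles):
    """Each cycle, dispatch from exactly one thread, rotating round-robin
    and skipping threads that are stalled -- unlike SMT, two threads never
    issue in the same cycle."""
    order, i = [], 0
    for _ in range(cycles):
        for _ in range(len(ready)):  # scan for the next ready thread
            tid = i % len(ready)
            i += 1
            if ready[tid]:
                order.append(tid)
                break
    return order

# Thread 2 is stalled (e.g. on a cache miss), so the others share its slots:
barrel_dispatch([True, True, False, True], cycles=6)  # [0, 1, 3, 0, 1, 3]
```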

Gwaihir
Dec 8, 2009



Hair Elf

sincx posted:

Sandy Bridge really was special. Sandy Bridge was the last generation of mainstream Intel chips that had a soldered heatspreader.

Ihmemies posted:

Hopefully AMD will do it properly and forces intel to compete.

Late reply to these posts but I remember seeing this article posted a while ago and then bam it's relevant again:

http://overclocking.guide/the-truth...-cpu-soldering/

There's a ton of information in there, but one of the takeaways is that the occurrence of thermal voids and micro-cracking in the solder increase dramatically over time (due to thermal cycling, aka turning the PC on/off) as the area being soldered decreases. (Intel still solders Xeons with significantly larger dies)

Depending on how large a die Zen is, it might not end up with solder.

On the plus side, going by that guy's results, soldering his Skylake chip didn't end up with dramatically better results than de-lidding and using a good liquid metal TIM. So it's not like we're never getting solder levels of performance again.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Gwaihir posted:

On the plus side, going by that guy's results, soldering his Skylake chip didn't end up with dramatically better results than de-lidding and using a good liquid metal TIM. So it's not like we're never getting solder levels of performance again.

No, but there's certainly an extra layer of hassle and danger (both perceived and real) in de-lidding + remounting versus just a durable soldered package. Not saying there aren't legit reasons for it, but it is an annoyance.

Gwaihir
Dec 8, 2009



Hair Elf

For sure.

I'm not sure which of the CPU threads it was in, but someone linked a full kit that makes it foolproof. With that existing, I'd do it myself if I were building a new machine, but I feel no urge to go back and take my existing computer apart.

There's just nothing I do on my home machine that makes gaining an extra couple hundred MHz and/or dropping my load temps by 10 degrees worth tearing everything down and putting it back together.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!


karoshi posted:

Like, f.ex., in-order GPUs running 4000 threads on a single die. There was a marketing presentation by NV* comparing the FLOPs/rest-of-it power ratio for a GPU and an OOOE x86 core, IIRC it was around 2 orders of magnitude. The OOOE machinery is complicated, remove it and you go from 2 IPC to 10 CPI, add 20 threads and you're back at 2 IPC with less power overall (numbers pulled out of my rear end).

*: not a fair comparison, the target workloads are quite different.

Also important in making that high FLOPS/W figure possible is that GPUs and other throughput optimized processors have a lot less cache per core than traditional CPUs which makes a big difference because big caches use a ton of power and area. Big caches are needed to run one thread really quickly but if that's not important you can make a lot more efficient use of the available die and power budget.

MaxxBot fucked around with this message at 16:43 on Dec 20, 2016

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast


Paul MaudDib posted:

I'm pretty certain that the vast, vast majority of i5s have fully-functional hyperthreading units onboard, i.e. they are just being locked down for market segmentation. Is making people buy a whole new processor supposed to be better somehow?


Aww, so pwwecious, baby's first encounter with capacity-on-demand systems.

http://www-03.ibm.com/systems/power.../offerings.html

http://www.fujitsu.com/global/produ...-on-demand.html

https://docs.oracle.com/cd/E19855-01/E21467-01/cod.html

Just think, enterprise hardware companies will stick whole processors and memory in your system that you aren't allowed to use until you buy a license! Scandal!

Not surprised by it at all, but that doesn't stop it being a cynical marketing tactic. It's not like cutting off parts of a system behind a licence excites the engineers who design those machines.

It's not a scandal at all, it's just something that that market segment will bear. If you want to make more money, you definitely want to find the limits of what your target market will spend.

HalloKitty fucked around with this message at 17:01 on Dec 20, 2016

SpelledBackwards
Jan 7, 2001

I found this image on the Internet, perhaps you've heard of it? It's been around for a while I hear.



silence_kit posted:

There's already an abbreviation for this: e.g.

To pile on, it's not good form to show a speed decrease by changing the units from 2 IPC (instructions per clock) to 10 CPI (clocks per instruction) unless you're just changing the prefix like from kilometers to meters. Even then you're opening the door for confusion, and in this CPU case, inverting the unit is pretty bad. It would have been much better to say from 2 IPC down to 0.1 IPC.
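A tiny helper makes the point concrete (illustrative only):

```python
def to_ipc(value, unit):
    """Normalize a throughput figure to IPC so comparisons stay in one
    unit: 2 IPC is 2.0, while 10 CPI is 1/10 = 0.1 IPC -- a 20x slowdown
    when stated consistently."""
    if unit == "IPC":
        return float(value)
    if unit == "CPI":
        return 1.0 / value
    raise ValueError(f"unknown unit: {unit}")

slowdown = to_ipc(2, "IPC") / to_ipc(10, "CPI")  # 20x, stated in one unit
```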

silence_kit
Jul 14, 2011


HalloKitty posted:

Not surprised by it at all, but it doesn't stop it being a cynical marketing tactic. It's not like cutting off parts of a system behind a licence excites the engineers that design those machines.

Again, if you have a problem with how the computer chip industry creates product lines by manufacturing one product and disabling varying amounts of functionality, then you must have major issues with the software industry. 'Why is my free/lower cost software copy artificially devoid of features? The additional marginal production cost to let me download/authenticate the program with all the features is zero!'

PerrineClostermann
Dec 15, 2012

by FactsAreUseless


If you claim you don't see the difference then you're being disingenuous.

silence_kit
Jul 14, 2011


What is the difference? I honestly don't see it.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless


Hardware is a physical object. Software is numbers and configuration.

silence_kit
Jul 14, 2011


PerrineClostermann posted:

Hardware is a physical object. Software is numbers and configuration.

Why is this distinction important?

Why is it okay in your mind for software companies to disable functionality which could be added for free to their lower trim levels when it is not ok for computer chip companies to do so?

silence_kit fucked around with this message at 17:38 on Dec 20, 2016

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

PerrineClostermann posted:

Hardware is a physical object. Software is numbers and configuration.

Software is numbers and configuration that tells your physical object what to do. There's really no conceptual difference between locking off some of those numbers (the end result being that your physical object is limited in what it can do) vs locking off some parts of the physical object (the end result being that your physical object is limited in what it can do).

In both cases you are artificially limited in what your physical object can do by people who want you to pay more money to enjoy the full experience.

It's not like selling you legitimately partially-defective or physically differentiated parts would be any cheaper than the current situation.

silence_kit
Jul 14, 2011


Designing a computer chip is mostly 'numbers and configuration'. The per-part raw material cost and even the per-part production cost are shockingly low if you amortize the production set-up cost over many units. I don't know the exact number, but Intel is charging you $200 for something that costs $1-10 for them to produce.
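The amortization argument in rough numbers (all figures invented for illustration):

```python
def per_unit_cost(fixed_costs, marginal_cost, units_sold):
    """Design and mask-set costs dominate chip economics; spread over
    enough volume, per-part cost collapses toward the marginal cost."""
    return fixed_costs / units_sold + marginal_cost

per_unit_cost(500_000_000, 5, 1_000_000)    # 505.0 -- brutal at low volume
per_unit_cost(500_000_000, 5, 100_000_000)  # 10.0 -- trivial at high volume
```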

evilweasel
Aug 24, 2002



PerrineClostermann posted:

Hardware is a physical object. Software is numbers and configuration.

This is not a meaningful distinction in any real sense here. You might as well be commenting on the color.

GRINDCORE MEGGIDO
Feb 28, 1985




Any posters in here delidded their Skylake chips?

PerrineClostermann
Dec 15, 2012

by FactsAreUseless


Let me download this processor real quick

fishmech
Jul 16, 2006

by VideoGames


Salad Prong

PerrineClostermann posted:

Let me download this processor real quick

You can literally download improved functionality in the form of new microcode in regular CPUs though, to say nothing of what you can get up to with FPGA stuff.

evilweasel
Aug 24, 2002



PerrineClostermann posted:

Let me download this processor real quick

its silver. SILVER. SILVER how can you plebs not see this is important for

uh

*crickets*

NihilismNow
Aug 31, 2003


Microsoft limiting me to 32 GB of ram in some windows versions unless i give them money is reasonable and good.
Intel limiting me to 2mb cache unless i give them money is a loving outrage and this will not stand.

E: Car manufacturers also do this all the time (a PHYSICAL PRODUCT!): a lot of the functionality for options is already in the car; they just add the switch or turn the option on in firmware.

NihilismNow fucked around with this message at 18:21 on Dec 20, 2016

EdEddnEddy
Apr 5, 2012





Gwaihir posted:

For sure.

I'm not sure which of the CPU threads it was in, but someone linked a full kit that makes it foolproof. With that existing, I'd do it myself if I were building a new machine, but I feel no urge to go back and take my existing computer apart.

There's just nothing I do on my home machine that makes gaining an extra couple hundred MHz and/or dropping my load temps by 10 degrees worth tearing everything down and putting it back together.

Also of note, the -E chips all remained soldered, as the dies were bigger (not sure about the quad-core parts, but the 6+ core ones were).

Also, with how large the newer CPUs' dies have gotten with the addition of the iGPU, you would think they'd be big enough for solder to be used again.

The main bummer is why Intel's TIM performs as if they're using some cheap bubble gum crap instead of high-quality TIM similar to the stuff they already paste on their box coolers. That stuff on first application was on par with Arctic Silver or other high-quality TIM, so seeing a 30C improvement just from delidding leads me to think it was a bad part (an early run with a bit of a gap between the die and heatspreader, or some other weird defect, like too much or too-thick glue for the heatspreader base).


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

NihilismNow posted:

E: Car manufacturers also do this all the time (a PHYSICAL PRODUCT!): a lot of the functionality for options is already in the car; they just add the switch or turn the option on in firmware.

Tesla is really doubling down on this, with the S60 having a 75kWh battery, but the "stock" version being limited to only 60kWh unless you pay to unlock it. Same with the autopilot features: all the sensors are built in, but disabled until you pay to unlock them. Literally an over-the-air upgrade for them.
