forbidden dialectics
Jul 26, 2005





If you already have a vice, you don't need the delidder. Just jam the heat spreader in the vice, aim a block of wood at the PCB, and whack it with a hammer. The processor will go shooting across the room and PRESTO you're all done.


Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Platystemon posted:

Intel's new Minamata Bay architecture.

:pusheen:

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

ConanTheLibrarian posted:

One of the related vids is the same tool made with poo poo materials failing. They don't show the aftermath but the plastic of the tool gives before the lid pops.

You could just wrap it in a bit of cloth to spare the plastic.

New Zealand can eat me
Aug 29, 2008

:matters:


The settings of the printer itself and the PLA used have more to do with the strength than anything else. The OP for that tool recommends 3 perimeters and 30% infill, but neglected to specify a fill type. (Layer thickness appears to be set to 100mm)

If you really needed it to delid like 100 processors or something insane, I would probably use 4 perimeters, at LEAST 60% infill, and a diagonal fill type (typically the strongest)

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

Heaps of Sheeps posted:

If you already have a vice, you don't need the delidder. Just jam the heat spreader in the vice, aim a block of wood at the PCB, and whack it with a hammer. The processor will go shooting across the room and PRESTO you're all done.

Don't do this with a modern Intel processor; the PCB is much thinner.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

silence_kit posted:

No way, material cost per wafer has to be O($100). I happen to know that plain 100mm electronics grade silicon wafers in small volumes are ~$20 each (this is actually incredibly amazing by the way--electronics grade silicon may be the purest material known to man, and yet it is so cheap). Intel, although they are buying much bigger wafers, probably are able to get a much better price/area than I could.


100mm wafers might as well be free. The problem is you get a compounding cost increase with every step up in size. As of 2014, 300mm were running $400 and 450s were looking at $600-800. It's hard to make a perfect 300 or 450mm ingot, and they're hard to cut perfectly, which means thicker slices and more post-processing to polish down perfectly. The cost per square inch jumps up about 50% per step, with a big spike when a size is new that settles back down to the 1.5^generation curve.

That's just the silicon. I'd consider all the chemical processes involved in production to be material costs as well, since there's some amount of loss on each step.
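
Just to make that scaling concrete, here's a rough back-of-the-envelope sketch in Python. All of it is illustrative: the cost units are arbitrary, and I'm treating 100/200/300/450mm as four evenly spaced "steps", which is a simplification of how the generations actually went.

code:
# Back-of-the-envelope look at the "~50% more per square inch per size step"
# idea. Cost units are arbitrary; wafer sizes are treated as four even steps.
import math

base_cost_per_sq_in = 1.0                 # arbitrary units at the 100mm step
diameters_mm = [100, 200, 300, 450]

for step, d in enumerate(diameters_mm):
    area_sq_in = math.pi * (d / 25.4 / 2) ** 2
    cost_per_sq_in = base_cost_per_sq_in * 1.5 ** step
    print(f"{d:3d}mm: {area_sq_in:6.1f} sq in, "
          f"cost/sq in {cost_per_sq_in:5.2f}, "
          f"relative wafer cost {area_sq_in * cost_per_sq_in:7.1f}")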

silence_kit
Jul 14, 2011

by the sex ghost

Harik posted:

100mm wafers might as well be free. The problem is you get a compounding cost increase with every step up in size. As of 2014, 300mm were running $400 and 450s were looking at $600-800. It's hard to make a perfect 300 or 450mm ingot, and they're hard to cut perfectly, which means thicker slices and more post-processing to polish down perfectly. The cost per square inch jumps up about 50% per step, with a big spike when a size is new that settles back down to the 1.5^generation curve.

That's just the silicon. I'd consider all the chemical processes involved in production to be material costs as well, since there's some amount of loss on each step.

I just priced 300mm plain Si wafers on a website which sells wafers for electronics to scientists and it is $80 per wafer for a box of 25. Where are you getting the $400 number?

Maybe your $400 number includes the epitaxy cost. That sounds high to me though. People have told me that epitaxy is expensive, and I understand why it is expensive if you order something custom as a one-off, but no one has ever explained to me why it has to be expensive in volume.

silence_kit fucked around with this message at 03:56 on Dec 23, 2016

Lowclock
Oct 26, 2005
Is there any reason why you can't just remove the IHS with a razor anymore? It was incredibly easy on my old 3570k.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
The razor method is the only classical method left, according to my five minutes of research.

I have no experience with delidding so take that with some salt.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

silence_kit posted:

I just priced 300mm plain Si wafers on a website which sells wafers for electronics to scientists and it is $80 per wafer for a box of 25. Where are you getting the $400 number?

Maybe your $400 number includes the epitaxy cost. That sounds high to me though. People have told me that epitaxy is expensive, and I understand why it is expensive if you order something custom as a one-off, but no one has ever explained to me why it has to be expensive in volume.

Then I misread this as being wafer costs when they were talking about total costs of processing. The materials cost is a lot lower than I thought then.

So complete processing does cost more per square inch as you go to larger wafers. I'll try to find a better source on materials costs, since the majority of total cost will be capital and this whole discussion was about "aside from capital, what are the costs."

Harik fucked around with this message at 06:02 on Dec 23, 2016

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Material costs depend on the product node, and the bargaining skills of the company. Like I said, consumables can run anywhere from a grand a wafer down to a hundred or less, but that includes spare parts along with the raw material. Granted, the spare parts are quite a bit of it as they degrade often, and there needs to be a constant flow of new pumps, quartz rings, robot parts, and a whole host of other things to keep things alive pretty much 24/7. I would hazard a guess it's probably a majority of it, though. However, of the raw materials the Si makes up over half.

Also, those wafer prices have no bearing on industry costs. I've seen the wafers and work done in uni clean rooms and there's really no comparison. The reason those are so cheap is most likely because they aren't pure, relatively speaking. Remember, with sub-20nm real gate lengths and low metal layer pitches coming over the next few years, a single impurity atom can gently caress an entire die. The wafers places like Intel, TSMC, Samsung, etc. use by necessity need to be way, way upwards of just 99.9% pure silicon, unless they need something predoped. Even then, it would be way, way upwards of just 99.9% pure Si plus As or B or P or whatever they use.
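
For a sense of why "way upwards of 99.9%" is an understatement, here's a quick sanity check. The atom density of silicon (~5e22 per cm^3) is real; the 100nm cube is just an arbitrary volume to count impurities in, and the purity levels are examples (electronic-grade silicon is usually quoted at around nine nines).

code:
# Rough sanity check on purity. Silicon has ~5e22 atoms per cm^3 (real number);
# the 100nm cube is an arbitrary "interesting volume", purely for illustration.
SI_ATOMS_PER_CM3 = 5.0e22
cube_volume_cm3 = (100e-7) ** 3           # a 100nm cube, side expressed in cm

for purity in (0.999, 0.999999, 0.999999999):
    impurity_density = SI_ATOMS_PER_CM3 * (1.0 - purity)
    per_cube = impurity_density * cube_volume_cm3
    print(f"purity {purity:.9f}: ~{per_cube:,.2f} impurity atoms per 100nm cube")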

New Zealand can eat me
Aug 29, 2008

:matters:


silence_kit posted:

People have told me that epitaxy is expensive, and I understand why it is expensive if you order something custom as a one-off, but no one has ever explained to me why it has to be expensive in volume.

I would also like to know the answer to this!

My best guess is that it's a time issue: the equipment seems really expensive, so having to do a large volume means a long queue and space being taken up while materials wait around?

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

Material costs depend on the product node, and the bargaining skills of the company.

Yeah, if anything, I'd think that they'd be able to get a better price than me buying a box of wafers from some online store.

Watermelon Daiquiri posted:

However, of the raw materials the Si makes up over half.

I don't think the wafer itself is that expensive, but in that number are you including the silane for the various deposition steps, most of which doesn't get incorporated into the wafer and goes straight into the exhaust of the CVD equipment?

Watermelon Daiquiri posted:

Also, those wafer prices have no bearing on industry costs. I've seen the wafers and work done in uni clean rooms and there's really no comparison. The reason those are so cheap is most likely because they aren't pure, relatively speaking. Remember, with sub-20nm real gate lengths and low metal layer pitches coming over the next few years, a single impurity atom can gently caress an entire die. The wafers places like Intel, TSMC, Samsung, etc. use by necessity need to be way, way upwards of just 99.9% pure silicon, unless they need something predoped. Even then, it would be way, way upwards of just 99.9% pure Si plus As or B or P or whatever they use.

No, those wafers I priced were electronics grade, and were pretty high resistivity. University researchers obviously can't afford, and don't need, great manufacturing uniformity for the science experiments that they do, but if they are making silicon transistor devices, they do need high-purity silicon.

New Zealand can eat me posted:

I would also like to know the answer to this!

My best guess is that it's a time issue: the equipment seems really expensive, so having to do a large volume means a long queue and space being taken up while materials wait around?

Almost all mass-production manufacturing equipment is expensive to buy and expensive to run, but if the throughput is high enough the purchase and operation costs can be very low per manufactured unit. This kind of strategy only works if you can manufacture and sell many units. Computer chips are probably the most popular type of integrated circuit and they are produced and sold in large quantities.

In the CVD epitaxy reactors, they are batch processing many wafers at a time. It's just a question of what the process time is and whether it actually is a problem.
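
A minimal sketch of that amortization argument, with completely made-up round numbers (the tool price, lifetime, utilization, and throughput below are all hypothetical, not real epi-reactor figures):

code:
# Illustrative amortization only: tool price, lifetime, utilization and
# throughput are invented round numbers, not real epi-reactor figures.
tool_price = 5_000_000                    # USD, hypothetical
useful_life_years = 5
uptime_hours = useful_life_years * 365 * 24 * 0.85   # assume 85% utilization
wafers_per_hour = 25                      # hypothetical batch throughput
operating_cost_per_hour = 200             # power, gases, maintenance (made up)

capital_per_wafer = tool_price / (uptime_hours * wafers_per_hour)
operating_per_wafer = operating_cost_per_hour / wafers_per_hour
print(f"capital per wafer:   ${capital_per_wafer:.2f}")
print(f"operating per wafer: ${operating_per_wafer:.2f}")
# With numbers in this ballpark the per-wafer cost lands in the dollars-to-tens
# range, which is the point: high throughput amortizes an expensive tool.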

silence_kit fucked around with this message at 17:43 on Dec 23, 2016

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

Lowclock posted:

Is there any reason why you can't just remove the IHS with a razor anymore? It was incredibly easy on my old 3570k.

Fear of nicking a trace and losing the ability to use half of your RAM or worse.

AEMINAL
May 22, 2015

barf barf i am a dog, barf on your carpet, barf
Was looking at a wiki page for fire extinguishers today and came across this 3M liquid that won't fry electronics.

People naturally use it to cool PCs with a bare die lmao

There are videos on YouTube, looks drat cool. You can see the CPU evaporate the liquid as bubbles when it's hot

Toast Museum
Dec 3, 2005

30% Iron Chef

AEMINAL posted:

Was looking at a wiki page for fire extinguishers today and came across this 3M liquid that won't fry electronics.

People naturally use it to cool PCs with a bare die lmao

There are videos on YouTube, looks drat cool. You can see the CPU evaporate the liquid as bubbles when it's hot

Looks like it's not just hobbyists using it, either:

https://www.youtube.com/watch?v=a6ErbZtpL88

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Yeah, the numbers I give are based off of monthly consumables figures vs monthly wafer shipments for a major wafer fab. I mentioned waste in this thread before, I know. And yeah, if a researcher in short-channel stuff or stuff that affects it needs a wafer they are going to need the best stuff for the same reasons, but most of them who are interested in other things like process steps don't need anything as good. But, tbh, I'm not interested in playing the one-upsmanship game here that I sense.

New Zealand can eat me
Aug 29, 2008

:matters:


silence_kit posted:

Almost all mass-production manufacturing equipment is expensive to buy and expensive to run, but if the throughput is high enough the purchase and operation costs can be very low per manufactured unit. This kind of strategy only works if you can manufacture and sell many units. Computer chips are probably the most popular type of integrated circuit and they are produced and sold in large quantities.

In the CVD epitaxy reactors, they are batch processing many wafers at a time. It's just a question of what the process time is and whether it actually is a problem.

I should have been clearer: I was implying that the equipment specific to this process is so complicated to produce that maybe fewer than 10,000 or even 1,000 units are made per year total. Kind of like those top-end mills: when Apple needed more of them, they had to in effect subsidize new factories for both of those companies so that they could produce more of the equipment they needed.

Digging around, I counted maybe 20 or 21 companies that produce this kind of equipment and have also won quality/consistency awards from Intel. If I could figure out how much the machines they sell cost, I might be able to figure out how many units they are selling based on the earnings reports, but I'm really just grasping at straws and attempting to do a really lazy Asymco-style impression.

EdEddnEddy
Apr 5, 2012



Looking into 3D printing since I have a Tiko coming sometime next month (the boat is in LA, just waiting for the shipment to go from there to me now), and it looks like most modern-ish 3D printers can print in some form of ABS, PLA, and even wood and metallic filament (like 40/60 metal/plastic) as long as they have the correct settings. So making a delidder out of ABS should make one strong enough to survive more than a single delidding, I would imagine. Can't wait to get my dang printer so I can spend like a week (month) dialing in the settings.

Also that immersion cooling looks bad rear end; however, I don't see it being very quiet since it's going to be making boiling bubbles nonstop. Might be cool as a sort of ambient water fixture sound, but nowhere near quiet like a closed-loop cooler can be or even good air cooling.

EdEddnEddy fucked around with this message at 23:41 on Dec 23, 2016

vivisecting
Dec 13, 2012

it's been 15 years but im still upset that yamato became an astronaut and yet absolutely no one joined the federation since thats actually more plausible than that ending
Can someone tell me which thing I need to download in order to make my ethernet work? In the device manager it says I need to update my driver for the ethernet controller. This is my motherboard: http://www.asrock.com/mb/Intel/Z97E-ITXac/?cat=Download&os=Win8a64

I've survived two years using just wifi, but now I really want to use my Steam Link for more than just Netflix.

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

vivisecting posted:

Can someone tell me which thing I need to download in order to make my ethernet work? In the device manager it says I need to update my driver for the ethernet controller. This is my motherboard: http://www.asrock.com/mb/Intel/Z97E-ITXac/?cat=Download&os=Win8a64

I've survived two years using just wifi, but now I really want to use my Steam Link for more than just Netflix.

I would assume the one listed as Lan Driver.

vivisecting
Dec 13, 2012

it's been 15 years but im still upset that yamato became an astronaut and yet absolutely no one joined the federation since thats actually more plausible than that ending

Lowen SoDium posted:

I would assume the one listed as Lan Driver.

That's what I guessed. But I didn't want to just download a bunch of poo poo in case it, like. Messed something up? :shrug:

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars


If you're on Windows 10 you could probably just try updating from Windows Update - through its Update Drivers wizard in Device Manager, not in Settings.

William Bear
Oct 26, 2012

"That's what they all say!"
Excuse me if this sounds like a dumb question; I'm not that knowledgeable about CPUs.

I have an Intel i5 CPU with an advertised clock speed of 3.0 GHz. Recently, I ran an Intel benchmarking utility and it returned a clock speed of 3.29 GHz, a non-negligible difference. Why is that?

It's not overclocked; to my knowledge the CPU isn't even overclockable. My only guess is that 3.29 isn't a sustainable speed, since my computer was getting pretty loud and hot. I still found it odd.

Platystemon
Feb 13, 2012

BREADS
Intel Turbo Boost

When only certain parts of the CPU are in use, the chip takes advantage of the decreased power/cooling requirements to increase the clock speed of those parts.
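
If you want to actually watch it happen, here's a quick sketch using the third-party psutil package (assuming it's installed; on some machines cpu_freq() isn't supported or only reports the base clock, so treat it as a rough view):

code:
# Watch load vs. reported clock for ~10 seconds. Requires the third-party
# psutil package; cpu_freq() may be unsupported (None) on some platforms.
import psutil

for _ in range(10):
    load = psutil.cpu_percent(interval=1)   # also acts as the 1s sample delay
    freq = psutil.cpu_freq()
    if freq is None:
        print("cpu_freq() not supported on this platform")
        break
    print(f"load {load:5.1f}%  current {freq.current:7.1f} MHz  "
          f"max {freq.max:7.1f} MHz")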

Kazinsal
Dec 13, 2011



With an overclocking board and CPU combo you can also set Turbo Boost up to run on all cores at once (instead of only on a certain number of cores at once), or to run all the time on all cores, or to disable all dynamic clock lowering and run at full tilt 24/7.

William Bear
Oct 26, 2012

"That's what they all say!"
Ah, thanks, I'm doing some reading on it now. I can't believe I never realized I had this.

In your experience, how long does Turbo Boost last before conditions deactivate it? Would it be suitable for gaming in spurts?

Kazinsal
Dec 13, 2011



As long as you're not hitting some ridiculous temperature (like, 80+ C) it's going to maintain itself indefinitely.

Rastor
Jun 2, 2001

Stupid rumor: Intel is considering no longer guaranteeing x86 backward compatibility, sometime in the 2019-2020 time frame.

Platystemon
Feb 13, 2012

BREADS

William Bear posted:

Ah, thanks, I'm doing some reading on it now. I can't believe I never realized I had this.

In your experience, how long does Turbo Boost last before conditions deactivate it? Would it be suitable for gaming in spurts?

Your CPU (and cooler) has little thermal mass compared to the tens of watts of power it uses. It takes a negligible amount of time to heat up under load.

Don’t think of it in terms of time. How much Turbo Boost helps depends on how the game is programmed.

With only one core in use, you’ll get boosted speed for as long as you want. If the game is using all the cores, assume Turbo Boost won’t do much for you, even during the first minute.

Platystemon
Feb 13, 2012

BREADS

Rastor posted:

Stupid rumor: Intel is considering no longer guaranteeing x86 backward compatibility, sometime in the 2019-2020 time frame.

If it were true, a lot of people would need to know about it well in advance.

If Intel breaks compatibility, you won’t hear about it as a rumour. It will be publicly announced.

The only people who know about it before then will be working for Intel under non‐disclosure agreements.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Platystemon posted:

If it were true, a lot of people would need to know about it well in advance.

If Intel breaks compatibility, you won’t hear about it as a rumour. It will be publicly announced.

The only people who know about it before then will be working for Intel under non‐disclosure agreements.

I think the article is writing things that aren't really true or even probable to try to hype up its cachet. The SIMD stuff at the end is probably the entire movement on Intel's part; removing the obsolete SIMD stuff like MMX or the first few SIMD generations would be fine, as even under emulation, newer CPUs will outperform the older ones that had the dedicated hardware. I could also see a high-density-targeted CPU design that does away with FP emulation entirely, along with more aggressively dropping other features, in a push for markets that don't use them, like high performance computing or storage computing. Desktop computing and general purpose server computing won't see anything so major.

Dropping legacy x86 stuff would be weird, because Intel already emulates all that. Modern x86 CPUs use a totally non-x86-compatible architecture for execution, and have a decoder that takes each x86 op and produces one (more or less*) micro-op that then passes through the execution stage(s). The resulting output is then put back into 'x86' so the program can find the result it expects in the way it expects. It's this type of system that provides all the 'lift' needed for branch prediction, SMT, pre-loading the cache, etc. x86-64 is treated the same way, just with a different decoder to produce the micro-ops. There is no reason to drop anything, because none of it really exists anyway.

*Most x86 instructions produce one or two micro-ops, but some instructions so commonly occur together that these 'sets' of x86 instructions only produce a single micro-op.
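
A toy sketch of that fusion idea, just to illustrate "adjacent ops that commonly pair up come out as one micro-op" -- this is nothing like how a real decoder is built, and the op names are made up:

code:
# Toy illustration only: a pretend decoder that emits one "micro-op" per
# x86 op, except that a cmp/test followed by a jump gets fused into one.
def decode(x86_ops):
    micro_ops = []
    i = 0
    while i < len(x86_ops):
        op = x86_ops[i]
        nxt = x86_ops[i + 1] if i + 1 < len(x86_ops) else None
        if op[0] in ("cmp", "test") and nxt and nxt[0].startswith("j"):
            # the classic fused pair: compare + conditional branch
            micro_ops.append(("cmp_and_branch", op[1:], nxt[1:]))
            i += 2
        else:
            micro_ops.append(("uop", op))
            i += 1
    return micro_ops

print(decode([("mov", "eax", "[mem]"),
              ("cmp", "eax", "0"),
              ("jne", "loop_top")]))
# the cmp+jne pair comes out as a single fused micro-op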

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

EoRaptor posted:

I think the article is writing things that aren't really true or even probable to try to hype up its cachet. The SIMD stuff at the end is probably the entire movement on Intel's part; removing the obsolete SIMD stuff like MMX or the first few SIMD generations would be fine, as even under emulation, newer CPUs will outperform the older ones that had the dedicated hardware. I could also see a high-density-targeted CPU design that does away with FP emulation entirely, along with more aggressively dropping other features, in a push for markets that don't use them, like high performance computing or storage computing. Desktop computing and general purpose server computing won't see anything so major.

Dropping legacy x86 stuff would be weird, because Intel already emulates all that. Modern x86 CPUs use a totally non-x86-compatible architecture for execution, and have a decoder that takes each x86 op and produces one (more or less*) micro-op that then passes through the execution stage(s). The resulting output is then put back into 'x86' so the program can find the result it expects in the way it expects. It's this type of system that provides all the 'lift' needed for branch prediction, SMT, pre-loading the cache, etc. x86-64 is treated the same way, just with a different decoder to produce the micro-ops. There is no reason to drop anything, because none of it really exists anyway.

*Most x86 instructions produce one or two micro-ops, but some instructions so commonly occur together that these 'sets' of x86 instructions only produce a single micro-op.

See, that's what I thought modern x86 processors did. I've taken a class on x86 asm and we even assembled a custom board and programmed 8086 processors with some old ROM chips and whatnot. I looked up how all that worked on newer stuff and it seemed like the explanations were essentially that x86 isn't really what's going on under the hood. It's all an abstraction layer on top of the actual workings of the processor.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Platystemon posted:

If it were true, a lot of people would need to know about it well in advance.

If Intel breaks compatibility, you won’t hear about it as a rumour. It will be publicly announced.

The only people who know about it before then will be working for Intel under non‐disclosure agreements.

Yeah for real

This sounds like typical WCCF half-baked rumors. Maybe they heard about micro-ops, or perhaps it's a new arch for IoT?

Platystemon
Feb 13, 2012

BREADS

EoRaptor posted:

I think the article is writing things that aren't really true or even probable to try to hype up its cachet.

That, too. Intel isn’t going to do it because there’s no good reason to.

The die area wasted on legacy cruft is small and it shrinks every generation.

But even if there were some compelling reason we don’t know about, it still doesn’t pass the sniff test for WCCF to break the story.

GRINDCORE MEGGIDO
Feb 28, 1985


But they believe the rumour to be true because Skylake does not have FIVR :v: (seriously wtf).

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Rastor posted:

Stupid rumor: Intel is considering no longer guaranteeing x86 backward compatibility, sometime in the 2019-2020 time frame.

Seems like bollocks; can't imagine too many resources are committed on-die to x86 instructions. Also, it would leave AMD in a seriously good position, and Intel wouldn't allow that.

No Gravitas
Jun 12, 2013

by FactsAreUseless

HalloKitty posted:

Seems like bollocks; can't imagine too many resources are committed on-die to x86 instructions. Also, it would leave AMD in a seriously good position, and Intel wouldn't allow that.

I'm not so sure about that. I don't think it is about the on-die resources, though that would be a possible minor side-benefit.

Current machine code has a ton of prefixes that use a lot of space. If you completely discard the x86 binary compatibility, you would be able to have a more compact or regular encoding, leading to better instruction caching. The prefixes also slow down instruction decoding when there are many of them present.
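
To put some bytes on that, here's a rough illustration using the usual encodings of "move the immediate 1 into a register" at three operand sizes (assumption: these are the standard assembler encodings; exact bytes can vary with how the assembler chooses to encode):

code:
# How prefixes pad the encoding of the same logical operation at different
# operand sizes (standard encodings; assemblers can pick other forms).
encodings = {
    "mov ax, 1   (0x66 operand-size prefix)": bytes.fromhex("66 b8 01 00"),
    "mov eax, 1  (no prefix)":                bytes.fromhex("b8 01 00 00 00"),
    "mov rax, 1  (0x48 REX.W prefix)":        bytes.fromhex("48 c7 c0 01 00 00 00"),
}
for desc, enc in encodings.items():
    print(f"{desc:42s} {enc.hex(' '):24s} {len(enc)} bytes")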

Itanium v2, here we come!

Kazinsal
Dec 13, 2011



What x86 needed thirty years ago was more than eight loving general purpose registers. Especially since one is the stack pointer and one is the stack frame pointer (unless your code is using frame pointer optimization, which makes debugging a massive pain in the rear end). x86-64 added another eight, but that's still a pitiful number of registers compared to x86's contemporaries. The 68000 had eight data registers and eight address registers, though one of them was a dedicated stack pointer that was automatically swapped out in a transition between user and kernel mode. ARM has a whole mess of registers, including a number that are windowed depending on what CPU mode you're in, such that you have a different known stack pointer and link register for each exception type.

x86 already has register renaming internally for parallelism in micro-ops. It has for decades. But until x86-64, you had six, sometimes seven registers to work with. That's it. A lot of intermediate results in calculations had to be at best a cache hit, and possibly a memory hit or -- worse yet -- a page fault and resultant memory juggling.

e: Itanium, on the other hand, had 128 general-purpose integer registers, 128 floating point registers, 64 one-bit predicate registers (used for compare results), and eight branch registers.

Kazinsal fucked around with this message at 13:27 on Dec 27, 2016


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Is there a reason Intel can't just introduce a new public architecture that maybe fixes and/or improves issues that come with x86 (are there even any worth pulling such a move for?), and add instructions for the OS to switch the CPU between decoders? So that the OS can run executables of both x86 and the new poo poo, and eventually Intel can do away with the x86 one? Or key anything in the CPU off virtual memory addresses, like everything past 64GB is new instructions, so that everything is transparent and can be mixed, just by letting the OS load executables at the respective addresses?

--edit: ^^^ I guess tons more registers might be an idea for an overhauled instruction set.
--edit: Mixing things transparently would be a bitch for call conventions, if you wanted to introduce new ones, I figure.

Combat Pretzel fucked around with this message at 18:33 on Dec 27, 2016
