WhyteRyce
Dec 30, 2001

Malcolm XML posted:

Worst part of the Intel monopoly is the lack of chipset I/O improvements. We've been stuck on PCIe 3 for like a decade. The lack of USB-C is probably related to Intel not improving the PCH.

Also, a non-integrated chipset is weird as heck. AMD moved to an SoC.

A non-integrated chipset means you can keep previous factories running. It also means you don't have to get your new high-speed buses working on a bleeding-edge process, with all the headache that entails.


PC LOAD LETTER
May 23, 2005
WTF?!
Aren't they using their 22nm process for chipsets still?

It's not bleeding edge, but it's still pretty darn fast, refined, flexible, and mature. I've been seeing industry commentary for years at places like EETimes that sub-32nm processes should, finally, allow for affordable if not cheap 10GbE switches and such, but that still hasn't really materialized. They have been getting cheaper, but I still wouldn't call them affordable or cheap yet.

The Illusive Man
Mar 27, 2008

~savior of yoomanity~
Obviously this question involves a fair amount of speculation, but:
How long do you all think a quad-core i7 (e.g., my 6700K) will remain viable as a high-end gaming CPU in the face of the escalating core wars?

The rumors of 8-core Coffee Lake chips are pretty tempting (or Zen 2, if they can get the clock speeds up a bit more), but I’m back and forth on whether it would be beneficial. Not that my 6700K is showing its age at all, but with a new console generation around the corner it is a little tempting to pick up an 8-core chip and know I’m set for 5+ years. And I could always find something to do with the 6700K elsewhere.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Space Racist posted:

Obviously this question involves a fair amount of speculation, but:
How long do you all think a quad-core i7 (e.g., my 6700K) will remain viable as a high-end gaming CPU in the face of the escalating core wars?

The rumors of 8-core Coffee Lake chips are pretty tempting (or Zen 2, if they can get the clock speeds up a bit more), but I’m back and forth on whether it would be beneficial. Not that my 6700K is showing its age at all, but with a new console generation around the corner it is a little tempting to pick up an 8-core chip and know I’m set for 5+ years. And I could always find something to do with the 6700K elsewhere.

I'm pulling this figure entirely out of my rear end (with educated guessing), but I'd say you'll probably be pretty comfortable for the next two years or so. Unless some new piss-easy programming language/method comes along or they seriously refine Vulkan, very few games programmers seem to have their poo poo together with regards to programming for multicore/multithreaded CPUs. We're recommending the 8700K to new builders who can swing it in the building thread simply because there's going to come a point where they'll be *forced* to make full use of a chip's potential, physical cores and virtual threads alike.

Also, so long as Intel's forced to stick on 14nm (no matter how many ~pluses~ they put to the right of it), the octacores will likely be *slower* for single-threaded game-intensive performance. Whereas it's not *terribly* difficult to get ~4.8/4.9/5.0 out of an 8700K, just the thought of adding an additional two cores to that package makes me think people are really going to struggle to get 4.5 out of the octas. Initial guesses at pricing for the CPU are flirting with $500-550, and no one's willing to even fathom a guess about what a mid-range Z390 board will cost. Rumor has it the reason we've not seen ~over-the-top~ Z370 boards is that the boardmakers are reserving their 'pull out all the stops'/kitchen sink boards for the Z390.

It would not shock me in the slightest if the octacore i7, a mid-range Z390 board, and a sufficient HSF for the TDP run you $1000 *alone*. Intel's not stupid - they're not going to obsolete their entry-level HEDT option without selling it at a similar price point.
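
To put a rough number on the multicore point above: if a game spends most of each frame on a single render thread, Amdahl's law says extra cores buy very little. A minimal sketch with made-up workload splits - the 60/40 serial/parallel ratio below is purely illustrative, not a benchmark of any actual game:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
# p = fraction of frame time that can actually run in parallel.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup over a single core for a given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.4  # hypothetical: 40% of a frame parallelizes, 60% sits on the render thread
for cores in (4, 6, 8):
    print(f"{cores} cores: {amdahl_speedup(p, cores):.2f}x over one core")

# Output: 4 cores ~1.43x, 6 cores ~1.50x, 8 cores ~1.54x --
# going from 6 to 8 cores gains only a few percent until games
# move more of their work off the render thread.
```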

BIG HEADLINE fucked around with this message at 23:58 on Jun 16, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

BIG HEADLINE posted:

We're recommending the 8700K to new builders who can swing it in the building thread simply because there's going to come a point where they'll be *forced* to make full use of a chip's potential, physical cores and virtual threads alike.

This, and also the 8700K clocks just as high as its predecessors so it's not like there's a better alternative for <=4 thread work.

BIG HEADLINE posted:

It would not shock me in the slightest if the octacore i7, a mid-range Z390 board, and a sufficient HSF for the TDP run you $1000 *alone*. Intel's not stupid - they're not going to obsolete their entry-level HEDT option without selling it at a similar price point.

I'd totally have agreed with this a few years ago, but right now I'd say that Intel's not going to obsolete their entry-level HEDT option because AMD will do it for them (has already done it for them - does anyone buy the 6800X these days?), and if they don't make their hypothetical i7-9700K price-competitive with a hypothetical R7-3700X then it's at their own peril. Especially so if said 9700K is going to have issues breaking 4.5GHz without exotic cooling, since a 2700X is already within spitting distance of that just using turbo.

Eletriarnation fucked around with this message at 00:36 on Jun 17, 2018

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

WhyteRyce posted:

A non-integrated chipset means you can keep previous factories running. It also means you don't have to get your new high-speed buses working on a bleeding-edge process, with all the headache that entails.

LOL so they are really hosed by TMG. AMD has no need to keep old depreciated fabs running

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Samsung, at least, runs 65nm in the same factory where they run 14, 10, and 7nm. Granted, the older nodes don't need the same super-precise lens arrays and double patterning that 14nm and beyond need, but there's no reason why they have to get rid of the simpler machines, considering they use them for the larger parts of the newer nodes (like the top 3 metal layers, for instance).

WhyteRyce
Dec 30, 2001

Malcolm XML posted:

LOL so they are really hosed by TMG. AMD has no need to keep old depreciated fabs running

Anyone who spent the money getting a fab up has every reason to keep it going for as long as possible. They cost a shitload of money but basically print money once they are up.

Cygni
Nov 12, 2005

raring to post

AMD and Intel have like the same block diagram on desktop (excluding the APUs), the differences being Intel has graphics and AMD has sound and a USB hub on-die. Neither has SATA or networking on die.

On mobile they are slightly different: AMD can nearly be an SoC if you use only NVMe, but you still need external networking. Intel sells their Coffee Lake parts with the south bridge, with gigabit and WiFi built in, on one package with the CPU anyway. So yeah, they aren’t that different.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Watermelon Daiquiri posted:

Samsung, at least, runs 65nm in the same factory where they run 14, 10, and 7nm. Granted, the older nodes don't need the same super-precise lens arrays and double patterning that 14nm and beyond need, but there's no reason why they have to get rid of the simpler machines, considering they use them for the larger parts of the newer nodes (like the top 3 metal layers, for instance).

The last statistic I saw about this was that 10% of all chips made garner 90% of the total revenue. Guess where the other 90% of chips, like $0.01 LED drivers, get made? (hint: not on the bleeding-edge nodes)
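
Quick arithmetic on what that 10%/90% split implies, taking the statistic at face value (nothing new here, just the ratio re-expressed):

```python
# If 10% of chips bring in 90% of revenue, the other 90% of chips share the
# remaining 10%. Average revenue per chip then differs by a factor of 81.
high_value_revenue, high_value_volume = 0.90, 0.10
commodity_revenue, commodity_volume = 0.10, 0.90

ratio = (high_value_revenue / high_value_volume) / (commodity_revenue / commodity_volume)
print(ratio)  # 81.0 -- the average high-end chip earns ~81x the average commodity chip
```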

eames
May 9, 2009

BIG HEADLINE posted:


Also, so long as Intel's forced to stick on 14nm (no matter how many ~pluses~ they put to the right of it), the octacores will likely be *slower* for single-threaded game-intensive performance. Whereas it's not *terribly* difficult to get ~4.8/4.9/5.0 out of an 8700K, just the thought of adding an additional two cores to that package makes me think people are really going to struggle to get 4.5 out of the octas.

I don't see a reason why the 8C should clock lower than what we have now. 7700K -> 8700K showed us that cooling appears to be the only limitation, but with a decent air cooler or AIO you'll be able to set a TDP limit in the BIOS to keep it from getting too warm (>80°C) during rendering, video encoding or silly synthetic AVX benchmarks.
That way your 8C Coffee Lake will run at 5.0-5.2 GHz unless all cores/threads are at 100% load all the time, which never happens in games as long as they're bound by the render thread(s).
In fact, when gaming it should actually run slightly cooler than an 8700K because it'll have similar heat output over a larger die area.
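
Rough arithmetic behind the "similar heat over a larger die" point. The power figure and die areas below are ballpark assumptions (the 8-core die hadn't shipped at the time), so treat this as an illustration of power density, not a prediction:

```python
# Toy power-density comparison. Package power and die areas are rough,
# assumed figures for illustration only.
package_power_w = 130.0   # same sustained power budget assumed for both chips
die_area_6c_mm2 = 150.0   # ~Coffee Lake 6-core, approximate
die_area_8c_mm2 = 175.0   # hypothetical 8-core die, approximate

for name, area in (("6-core", die_area_6c_mm2), ("8-core", die_area_8c_mm2)):
    print(f"{name}: {package_power_w / area:.2f} W/mm^2")

# ~0.87 W/mm^2 vs ~0.74 W/mm^2 -- the same heat spread over more silicon is a
# bit easier to cool, which is the basis for the claim that gaming temperatures
# could end up slightly lower on the bigger die.
```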

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

WhyteRyce posted:

Anyone who spent the money getting a fab up has every reason to keep it going for as long as possible. They cost a shitload of money but basically print money once they are up.

Sure but AMD can take advantage of that through outsourcing it to TSMC/GF/Samsung while intel is its only fab customer.

It might be cheap to keep depreciated fabs online but only if you have a need for them. Intel is stuck with stranded assets if they can't keep them running.

Xae
Jan 19, 2005

Malcolm XML posted:

Sure but AMD can take advantage of that through outsourcing it to TSMC/GF/Samsung while intel is its only fab customer.

It might be cheap to keep depreciated fabs online but only if you have a need for them. Intel is stuck with stranded assets if they can't keep them running.

I forget the exact numbers, but something like 90% of the cost of a chip is the R&D and building the fab.

Once you have the fab and the process up and running, the marginal cost is next to nothing.

So your options are:

A) Keep printing money from an old fab and make whatever you can slap in there
B) Stop making money and pay money to tear down the old fab

Why the hell would you choose B?
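
The same point in back-of-the-envelope form - every dollar figure below is invented for illustration; the only thing that matters is that the fab's construction cost is already sunk:

```python
# Sunk-cost view of an old, fully depreciated fab. Every number is made up.
wafer_revenue = 3_000.0        # hypothetical selling price per wafer of chipsets/flash/etc.
wafer_marginal_cost = 1_000.0  # hypothetical materials + labor + power per wafer
wafers_per_month = 20_000
demolition_cost = 100e6        # hypothetical cost to decommission the site

keep_running = (wafer_revenue - wafer_marginal_cost) * wafers_per_month  # per month

print(f"A) keep it running: ~${keep_running / 1e6:.0f}M contribution per month")
print(f"B) tear it down:    a ~${demolition_cost / 1e6:.0f}M demolition bill and no further revenue")
# Note that the original multi-billion construction cost appears nowhere above:
# it's sunk either way, which is why A wins as long as there's anything to fab.
```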

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Malcolm XML posted:

Sure but AMD can take advantage of that through outsourcing it to TSMC/GF/Samsung while intel is its only fab customer.

It might be cheap to keep depreciated fabs online but only if you have a need for them. Intel is stuck with stranded assets if they can't keep them running.

Intel definitely does fab for clients. Usually on a simplified stack, not quite as good as they use internally, and not on their latest nodes, but still. Even Intel needs to make the most of their investment.

WhyteRyce
Dec 30, 2001

Malcolm XML posted:

Sure but AMD can take advantage of that through outsourcing it to TSMC/GF/Samsung while intel is its only fab customer.

It might be cheap to keep depreciated fabs online but only if you have a need for them. Intel is stuck with stranded assets if they can't keep them running.

This was in response to the "it's weird" statement, which isn't true, because it made a whole lot of sense for Intel to keep the PCH on a previous node. It may be a problem now, but on the list of problems Intel may have it's probably not the most pressing one.

WhyteRyce fucked around with this message at 19:42 on Jun 17, 2018

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Lmao

https://twitter.com/FanlessTech/status/1008430324798382082?s=19

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Xae posted:

I forget the exact numbers, but something like 90% of the cost of a chip is the R&D and building the fab.

Once you have the fab and the process up and running, the marginal cost is next to nothing.

So your options are:

A) Keep printing money from an old fab and make whatever you can slap in there
B) Stop making money and pay money to tear down the old fab

Why the hell would you choose B?

Missing the point: Intel, as a fab owner, is required to do this. They cannot, even if they wanted to, move the PCH to a newer process without something else to fill up the older fabs.

TSMC will simply price it into the fabrication costs. AMD can choose which fab to use and is unconstrained by stranded assets.

Intel has very few fab clients besides itself, fwiw. Their processes are designed for internal use first.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
Anyway, the point was that TMG and owning fabs make sense if and only if you can keep fab utilization up, but making better technology forces you to move to newer processes.

The PCH mostly not being integrated is an artifact of having excess trailing-edge node capacity and not needing to compete with an SoC (apparently HPC loving loves these).

It also lets Intel charge more for two chips, and lets them bottleneck competition like Nvidia.

WhyteRyce
Dec 30, 2001

It's sure as poo poo easier bringing your high-speed I/Os up on a mature process than having your entire product dealing with new-process issues.

WhyteRyce
Dec 30, 2001

poo poo, if anything the 10nm delays have supported the decision to keep the PCH and CPU separate, because they are at least able to launch the Z390 with Coffee Lake.

WhyteRyce fucked around with this message at 01:39 on Jun 18, 2018

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
To be fair, a ton of 32nm fab space is being used for flash, I believe.

movax
Aug 30, 2008

Methylethylaldehyde posted:

To be fair, a ton of 32nm fab space is being used for flash, I believe.

I always thought it was n for CPUs, n-1 for PCHs / high-performance ASICs (maybe some 10GbE stuff) and then n-2 and beyond for flash and whatever else. Insert their LTE modems wherever applicable.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

movax posted:

I always thought it was n for CPUs, n-1 for PCHs / high-performance ASICs (maybe some 10GbE stuff) and then n-2 and beyond for flash and whatever else. Insert their LTE modems wherever applicable.

I think they still fab their LTE modems elsewhere since apparently 10nm and even 14nm suck rear end for analog or mixed-signal.

WhyteRyce posted:

It's sure as poo poo easier bringing your high-speed I/Os up on a mature process than having your entire product dealing with new-process issues.

I guess, but cheap high-speed SerDes are available only on smaller nodes, which explains why 10GbE and more USB 3.1 Gen 2 is taking so long.

Or Intel is just milking old assets, like any capitalist corporation would.

Cygni
Nov 12, 2005

raring to post

i too am very mad at intel for *checks notes* not including a 4 port USB3 hub on die

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Cygni posted:

i too am very mad at intel for *checks notes* not including a 4 port USB3 hub on die

Unironically mad at this.

I will die on this hill

movax
Aug 30, 2008

The less USB in my life the better. Thankfully I don’t have anything that requires 3.0.

Though kudos to the team responsible for Intel’s USB 2.0 silicon and drivers - it’s been the least troublesome USB solution I’ve ever dealt with. I imagine it’s been the same SIP guts since the ICH5-ICH6 days.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

movax posted:

I always thought it was n for CPUs, n-1 for PCHs / high-performance ASICs (maybe some 10GbE stuff) and then n-2 and beyond for flash and whatever else. Insert their LTE modems wherever applicable.

For flash you want big, chunky features on a completely perfected fab process, because when you layer the cells 64x deep and need 1000+ processing steps, you want it to have as few defects as humanly possible.

Also, the size all depends on who you're making it for, how much they're willing to spend on tapeout and debug, and how much they desperately need the power savings.

Once 7nm comes out, we should see a ton of 10GbE or better stuff come way down in price, as 14nm production shifts over.

silence_kit
Jul 14, 2011

by the sex ghost

KOTEX GOD OF BLOOD posted:

Isn't there some kind of quantum physics issue with that? (disclaimer: I studied political science and remember reading about this on Wikipedia)

Usually the theoretical limitation cited which would prevent the gate lengths of transistors from being arbitrarily small is direct quantum mechanical tunneling from source to drain. This problem would mean that the transistors still conduct electricity when switched off and can't really be fully switched off.

Transistors in computer chips are expected to conduct little electricity when switched off, and because of this expectation, circuits are designed so that most of the transistors on a chip spend most of their time sitting idle and switched off.
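
For a feel of why the source-to-drain tunneling worry only kicks in at very short channel lengths, here's a toy WKB estimate. The barrier height and effective mass are generic assumed values, not numbers for any real process, so only the exponential trend means anything:

```python
import math

# WKB transmission through a rectangular barrier: T ~ exp(-2 * kappa * L),
# with kappa = sqrt(2 * m * phi) / hbar. Purely a toy model of source-drain tunneling.
HBAR = 1.054571817e-34       # J*s
M_E = 9.1093837015e-31       # kg
m_eff = 0.2 * M_E            # assumed effective mass
phi = 0.3 * 1.602176634e-19  # assumed 0.3 eV barrier height, in joules

kappa = math.sqrt(2 * m_eff * phi) / HBAR  # ~4 per nm with these assumptions

for length_nm in (20, 10, 5, 2):
    t = math.exp(-2 * kappa * length_nm * 1e-9)
    print(f"{length_nm:2d} nm barrier: T ~ {t:.1e}")

# Transmission falls off exponentially with barrier length, so off-state
# source-drain tunneling is negligible at ~20 nm gate lengths and only
# becomes a real leakage problem as channels shrink toward a few nm.
```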

silence_kit fucked around with this message at 13:39 on Jun 18, 2018

canyoneer
Sep 13, 2005


I only have canyoneyes for you
Intel tears out older node fab capacity all the time.
They brownfield practically every new node. The model of "build a new fab, run until death" is the TSMC model, which works really great for them for a lot of reasons.

redeyes
Sep 14, 2002

by Fluffdaddy

movax posted:

The less USB in my life the better. Thankfully I don’t have anything that requires 3.0.

Though kudos to the team responsible for Intel’s USB 2.0 silicon and drivers - it’s been the least troublesome USB solution I’ve ever dealt with. I imagine it’s been the same SIP guts since the ICH5-ICH6 days.

This is a new complaint... what could anyone possibly have against USB?!

movax
Aug 30, 2008

redeyes posted:

This is a new complaint... what could anyone possibly have against USB?!

I’m being hyperbolic, but most of my BSODs have stemmed from USB 3.0 drivers (not Intel’s, admittedly), and it’s a bit annoying from a protocol and electrical perspective (for example, electrically isolating 480Mbps USB 2.0 is difficult due to the bi-directional NRZI physical layer).

(It’s really not bad and did wonders for consumers.)
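
For anyone wondering what the NRZI bit means there: USB 2.0 signals bits as transitions rather than levels, and the same differential pair carries traffic in both directions, which is what makes simple isolation awkward. A minimal sketch of just the line-coding rule (0 = toggle, 1 = hold) plus USB's bit-stuffing, with none of the packet framing:

```python
def usb_nrzi_encode(bits, initial_level=1):
    """NRZI as USB 2.0 uses it: a 0 bit toggles the line, a 1 bit holds it.

    USB also bit-stuffs: after six consecutive 1s a 0 is inserted so the
    receiver keeps seeing transitions and stays in sync.
    """
    levels = []
    level = initial_level
    ones_run = 0
    for bit in bits:
        if bit == 0:
            level ^= 1          # a 0 is encoded as a transition
            ones_run = 0
        else:
            ones_run += 1       # a 1 is encoded as "no change"
        levels.append(level)
        if ones_run == 6:       # bit stuffing: force a transition
            level ^= 1
            levels.append(level)
            ones_run = 0
    return levels

# The stuffed 0 after six consecutive 1s is what keeps transitions flowing
# even when the data itself would leave the line idle.
print(usb_nrzi_encode([0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1]))
```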

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I still hate the drat connectors on usb.

canyoneer
Sep 13, 2005


I only have canyoneyes for you

priznat posted:

I still hate the drat connectors on usb.

I have a Motorola x4 phone (and a Nexus 5x before that) and I love the USB-C style connector.
I eagerly await the bright future of USB-A connectors all going to C.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
USB-C is unfortunately bad in other ways - not so much the standard itself as manufacturers being so flaky.

redeyes
Sep 14, 2002

by Fluffdaddy
I was trying to think - I only have HDs that are USB 3.0. USB-C is a loving bummer though. I get the feeling it's going to be a dead technology except for Macs and phones.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I think Type-C is here to stay, but we're going to continue to see a two-tiered model for a while with desktops. Lots of accessories are still Type-A only, like mouse/KB receivers, wireless adapters, etc., and most peripherals with removable cables still use A-B cables unless they support 10Gbps operation. For that reason it would be strange to put out a motherboard without some rear Type-A ports or a case without any in the front.

Many desktops have only one Type-C port, if any, and it's the fastest port in the box, so that really discourages making Type-C accessories which don't need the speed - why take up your one fast port with a mouse receiver? It would help if motherboard OEMs started mixing in some Type-C where they currently have A, even if they don't want to dedicate the bandwidth to make it 3.1-capable, but I can understand that it's probably cheaper and less confusing to users to just use Type-C for 3.1.

With laptops though, I feel like the writing is on the wall. A single port that is not only smaller than Type-A in both dimensions but can carry AV+data+power and is reversible is too good to pass up. Larger or more budget machines will continue to have some Type-A ports for a while, but we're already seeing PC manufacturers follow Apple's lead on the more premium thin&light machines and go all or nearly all Type-C.

VulgarandStupid
Aug 5, 2003
I AM, AND ALWAYS WILL BE, UNFUCKABLE AND A TOTAL DISAPPOINTMENT TO EVERYONE. DAE WANNA CUM PLAY WITH ME!?




We still really don’t need USB-C on desktops. I’m sorry if this upsets some of you. Have they even sorted out how to make a front panel connector?

Twinty Zuleps
May 10, 2008

by R. Guyovich
Lipstick Apathy

priznat posted:

I still hate the drat connectors on usb.

I have a MIDI keyboard with a USB-B connector and I can make it disconnect from the computer while still in the socket by blowing on it.

Cygni
Nov 12, 2005

raring to post

USB-C is gonna replace B, Mini, and most of Micro... but probably not A for most stuff.


LRADIKAL
Jun 10, 2001

Fun Shoe

VulgarandStupid posted:

We still really don’t need USB-C on desktops. I’m sorry if this upsets some of you. Have they even sorted out how to make a front panel connector?

https://forums.somethingawful.com/showthread.php?threadid=3774409&pagenumber=475&perpage=40#post485007374

They exist, but are on few boards and cases.
