|
Malcolm XML posted:Worst part of Intel monopoly is the lack of chipset io improvements. We've been stuck on pcie3 for like a decade. Lack of usbc is probably related to intel not improving the PCH. Non integrated SoC means you can keep previous factories running. It also means you don't have to get your new high speed buses and all the headache that entails working on a bleeding edge process
|
# ? Jun 16, 2018 02:55 |
|
|
Aren't they still using their 22nm process for chipsets? It's not bleeding edge, but it's still pretty darn fast, refined, flexible, and mature. I've been seeing industry commentary for years at places like EETimes that sub-32nm processes should finally allow for affordable, if not cheap, 10GbE switches and such, but that still hasn't really materialized. They have been getting cheaper, but I still wouldn't call them affordable.
|
# ? Jun 16, 2018 08:48 |
|
Obviously this question involves a fair amount of speculation, but: How long do you all think a quad-core i7 (e.g., my 6700K) will remain viable as a high-end gaming CPU in the face of the escalating core wars? The rumors of 8-core Coffee Lake chips are pretty tempting (or Zen 2, if they can get the clock speeds up a bit more), but I'm back and forth on whether it would be beneficial or not. Not that my 6700K is showing its age at all, but with a new console generation around the corner it is a little tempting to pick up an 8-core chip and know I'm set for 5+ years. And I could always find something to do with the 6700K elsewhere.
|
# ? Jun 16, 2018 23:47 |
|
Space Racist posted:Obviously this question involves a fair amount of speculation, but:

I'm pulling this figure entirely out of my rear end (with educated guessing), but I'd say you'll probably be pretty comfortable for the next two years or so. Unless some new piss-easy programming language/method comes along or they seriously refine Vulkan, very few games programmers seem to have their poo poo together with regards to programming for multicore/multithreaded CPUs. We're recommending the 8700K to new builders who can swing it in the building thread simply because there's going to come a point where they'll be *forced* to make full use of a chip's potential, physical cores and virtual threads alike.

Also, so long as Intel's forced to stick on 14nm (no matter how many ~pluses~ they put to the right of it), the octacores will likely be *slower* for single-threaded game-intensive performance. Whereas it's not *terribly* difficult to get ~4.8/4.9/5.0 out of an 8700K, just the thought of adding an additional two cores to that package makes me think people are really going to struggle to get 4.5 out of the octas.

Initial guesses at pricing for the CPU are flirting with $500-550, and no one's willing to even fathom a guess about what a mid-range Z390 board will cost. Rumor has it the reason we've not seen ~over-the-top~ Z370 boards is that the boardmakers are reserving their 'pull out all the stops'/kitchen sink boards for the Z390. It would not shock me in the slightest if the octacore i7, mid-range Z390 board, and sufficient HSF for the TDP run you $1000 *alone*. Intel's not stupid - they're not going to obsolete their entry-level HEDT option without selling it at a similar price point.

BIG HEADLINE fucked around with this message at 23:58 on Jun 16, 2018 |
# ? Jun 16, 2018 23:53 |
|
BIG HEADLINE posted:We're recommending the 8700K to new builders who can swing it in the building thread simply because there's going to come a point where they'll be *forced* to make full use of a chip's potential, physical cores and virtual threads alike.

This, and also the 8700K clocks just as high as its predecessors, so it's not like there's a better alternative for <=4 thread work.

BIG HEADLINE posted:It would not shock me in the slightest if the octacore i7, mid-range Z390 board, and sufficient HSF for the TDP will not run you $1000 *alone*. Intel's not stupid - they're not going to obsolete their entry-level HEDT option without selling it at a similar price point.

I'd totally agree with this a few years ago, but right now I'd say that Intel's not going to obsolete their entry-level HEDT option because AMD will do it for them (has already done it for them - does anyone buy the 6800X these days?), and if they don't make their hypothetical i7-9700K price-competitive with a hypothetical R7-3700X then it's at their own peril. Especially so if said 9700K is going to have issues breaking 4.5GHz without exotic cooling, since a 2700X is already within spitting distance of that just using turbo.

Eletriarnation fucked around with this message at 00:36 on Jun 17, 2018 |
# ? Jun 17, 2018 00:32 |
|
WhyteRyce posted:Non integrated SoC means you can keep previous factories running. It also means you don't have to get your new high speed buses and all the headache that entails working on a bleeding edge process

LOL, so they are really hosed by TMG. AMD has no need to keep old depreciated fabs running
|
# ? Jun 17, 2018 01:01 |
Samsung, at least, runs 65nm in the same factory where they run 14, 10, and 7nm. Granted, the older nodes don't need the same super-precise lens arrays and double patterning that 14 and onward need, but there's no reason why they have to get rid of the simpler machines, considering they use them for the larger parts of the newer nodes (like the top 3 metal layers, for instance)
|
|
# ? Jun 17, 2018 01:12 |
|
Malcolm XML posted:LOL so they are really hosed by TMG. AMD has no need to keep old depreciated fabs running

Anyone who spent the money getting a fab up has every reason to keep it going for as long as possible. They cost a shitload of money but basically print money once they are up
|
# ? Jun 17, 2018 01:16 |
|
AMD and Intel have basically the same block diagram on desktop (excluding the APUs), the differences being that Intel has graphics and AMD has sound and a USB hub on-die. Neither has SATA or networking on-die. On mobile they are slightly different: AMD can nearly be an SoC if you use only NVMe, but you still need external networking. Intel sells their Coffee Lake parts with the south bridge, with gigabit and WiFi built in, on one package with the CPU anyway. So yeah, they aren't that different.
|
# ? Jun 17, 2018 02:04 |
|
Watermelon Daiquiri posted:Samsung, at least, runs 65nm in the same factory they run 14, 10, 7nm. Granted the older nodes dont need the same super precise lens arrays and double patterning 14 and on need but theres no reason why they have to get rid of the simpler machines considering they use them for the larger parts of the newer nodes (like the top 3 metal layers, for instance)

The last statistic I saw about this was that 10% of all chips made garner 90% of the total revenue. Guess where the other 90% of chips, like $0.01 LED drivers, get made? (hint: not the bleeding-edge nodes)
|
# ? Jun 17, 2018 05:31 |
|
BIG HEADLINE posted:
I don't see a reason why the 8C should clock lower than what we have now. 7700K -> 8700K showed us that cooling appears to be the only limitation, but with a decent air cooler or AIO you'll be able to set a TDP limit in the BIOS to keep it from getting too warm (>80°C) during rendering, video encoding, or silly synthetic AVX benchmarks. That way your 8C Coffee Lake will run at 5.0-5.2 GHz unless all cores/threads are at 100% load all the time, which never happens in games as long as they're bound by the render thread(s). In fact, when gaming it should actually run slightly cooler than an 8700K because it'll have similar heat output over a larger die.
|
# ? Jun 17, 2018 09:21 |
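The "similar heat over a larger die" argument above is just power-density arithmetic; here's a toy sketch (every number below is invented for illustration, not a measured spec for any real chip):

```python
# Toy power-density comparison. All wattages and die areas are hypothetical,
# chosen only to illustrate the poster's point: spreading similar heat over
# a larger die keeps watts-per-area the same or lower.

def power_density(watts, die_area_mm2):
    """Heat flux in watts per square millimetre of die."""
    return watts / die_area_mm2

# Made-up figures for a 6C part and a hypothetical 8C part:
six_core = power_density(watts=120, die_area_mm2=150)
eight_core = power_density(watts=150, die_area_mm2=200)

print(f"6C: {six_core:.2f} W/mm^2, 8C: {eight_core:.2f} W/mm^2")
```

With these invented numbers the larger die actually comes out slightly cooler per unit area, which is the intuition behind "slightly cooler than an 8700K" in games.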
|
WhyteRyce posted:Anyone who spent the money getting a fab up has every reason to keep it going for a long as possible. They cost a shitload of money but basically print money once they are up

Sure, but AMD can take advantage of that through outsourcing to TSMC/GF/Samsung, while Intel is its own fabs' only customer. It might be cheap to keep depreciated fabs online, but only if you have a need for them. Intel is stuck with stranded assets if they can't keep them running.
|
# ? Jun 17, 2018 18:59 |
|
Malcolm XML posted:Sure but AMD can take advantage of that through outsourcing it to TSMC/GF/Samsung while intel is its only fab customer.

I forget the exact numbers, but something like 90% of the cost of a chip is the R&D and building the fab. Once you have the fab and the process up and running, the marginal cost is next to nothing. So your options are:

A) Keep printing money from an old fab and make whatever you can slap in there
B) Stop making money and pay money to tear down the old fab

Why the hell would you choose B?
|
# ? Jun 17, 2018 19:03 |
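The A-or-B argument above is fixed-cost amortization; a toy sketch (the ~90% fixed-cost split is the poster's figure, the dollar amounts are hypothetical):

```python
# Toy fab-economics sketch. The dollar figures are invented; the point is
# that once the fixed cost (R&D + building the fab) is sunk, each extra chip
# only costs its small marginal cost, so running the old fab longer keeps
# driving the average cost per chip down.

def cost_per_chip(fixed_cost, marginal_cost, chips_produced):
    """Average cost per chip after amortizing the fixed cost over volume."""
    return fixed_cost / chips_produced + marginal_cost

# Hypothetical: $9B fixed cost, $10 marginal cost per chip.
early = cost_per_chip(9e9, 10, chips_produced=100_000_000)
late = cost_per_chip(9e9, 10, chips_produced=1_000_000_000)

print(f"after 100M chips: ${early:.0f}/chip, after 1B chips: ${late:.0f}/chip")
```

Option B forgoes all of that cheap marginal volume, which is why nobody voluntarily tears down a working fab.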
|
Malcolm XML posted:Sure but AMD can take advantage of that through outsourcing it to TSMC/GF/Samsung while intel is its only fab customer.

Intel definitely does fab for clients. Usually on a simplified stack, not quite as good as they use internally, and not on their latest nodes, but still. Even Intel needs to make the most of their investment.
|
# ? Jun 17, 2018 19:22 |
|
Malcolm XML posted:Sure but AMD can take advantage of that through outsourcing it to TSMC/GF/Samsung while intel is its only fab customer.

This was in response to the statement that it's weird, which it isn't, because it made a whole lot of sense for Intel to keep the PCH on a previous node. It may be a problem now, but on the list of problems Intel may have, it's probably not the most pressing one.

WhyteRyce fucked around with this message at 19:42 on Jun 17, 2018 |
# ? Jun 17, 2018 19:37 |
|
Lmao https://twitter.com/FanlessTech/status/1008430324798382082?s=19
|
# ? Jun 17, 2018 23:48 |
|
Xae posted:I forget the exact numbers but something like 90% of the cost of a chip is the R&D and building the fab.

Missing the point: Intel, as a fab owner, is required to do this. They cannot, even if they wanted to, move the PCH to a newer process without filling up the older fabs. TSMC will simply price it into the fabrication costs. AMD can choose which fab to use and is unconstrained by stranded assets. Intel has very few fab clients besides itself, fwiw. Their processes are designed for internal use first.
|
# ? Jun 18, 2018 01:11 |
|
Anyway, the point was that TMG and owning fabs make sense if and only if you can keep fab utilization up, but making better technology forces you to move to newer processes. The PCH not being integrated is an artifact of having excess trailing-edge node capacity and not needing to compete with an SoC (apparently HPC loving loves these). It also lets Intel charge more for two chips, and lets them bottleneck competition like NVIDIA.
|
# ? Jun 18, 2018 01:15 |
|
It's sure as poo poo easier bringing your high-speed IOs up on a mature process than having your entire product deal with new-process issues.
|
# ? Jun 18, 2018 01:26 |
|
poo poo, if anything the 10nm delays have supported the decision to keep the PCH and CPU separate, because they are at least able to launch the Z390 with Coffee Lake
WhyteRyce fucked around with this message at 01:39 on Jun 18, 2018 |
# ? Jun 18, 2018 01:36 |
|
To be fair, a ton of 32nm fab space is being used for flash, I believe.
|
# ? Jun 18, 2018 04:01 |
|
Methylethylaldehyde posted:To be fair, a ton of 32nm fab space is being used for flash, I believe.

I always thought it was n for CPUs, n-1 for PCHs / high-performance ASICs (maybe some 10GbE stuff), and then n-2 and beyond for flash and whatever else. Insert their LTE modems wherever applicable.
|
# ? Jun 18, 2018 04:09 |
|
movax posted:I always thought it was n for CPUs, n-1 for PCHs / high-performance ASICs (maybe some 10GbE stuff) and then n-2 and beyond for flash and whatever else. Insert their LTE modems whereever applicable.

I think they still fab their LTE modems elsewhere, since apparently 10nm and even 14nm suck rear end for analog or mixed signal.

WhyteRyce posted:It's sure as poo poo easier bringing your high speed IOs up on a mature process than having your entire product dealing new process issues.

I guess, but cheap high-speed SerDes are available only on smaller nodes, so that explains why 10GbE and more USB 3.1 Gen 2 are taking so long. Or Intel is just milking old assets, like any capitalist corporation would.
|
# ? Jun 18, 2018 04:49 |
|
i too am very mad at intel for *checks notes* not including a 4 port USB3 hub on die
|
# ? Jun 18, 2018 05:03 |
|
Cygni posted:i too am very mad at intel for *checks notes* not including a 4 port USB3 hub on die Unironically mad at this. I will die on this hill
|
# ? Jun 18, 2018 05:06 |
|
The less USB in my life, the better. Thankfully I don't have anything that requires 3.0. Though, kudos to the team responsible for Intel's USB 2.0 silicon and drivers; it's been the least troublesome USB solution I've ever dealt with. I imagine it's been the same SIP guts from the ICH5-ICH6 days.
|
# ? Jun 18, 2018 05:13 |
|
movax posted:I always thought it was n for CPUs, n-1 for PCHs / high-performance ASICs (maybe some 10GbE stuff) and then n-2 and beyond for flash and whatever else. Insert their LTE modems whereever applicable.

For flash you want big chunky features on a completely perfected fab process, because when you layer the cells 64x deep and need 1000+ processing steps, you want as few defects as humanly possible. Also, the node all depends on who you're making it for, how much they're willing to spend on tapeout and debug, and how much they desperately need the power savings. Once 7nm comes out, we should see a ton of 10GbE or better stuff come way down in price as 14nm production shifts over.
|
# ? Jun 18, 2018 06:35 |
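The defect argument above compounds: overall yield is roughly the product of per-step yields across those 1000+ steps, so even tiny per-step defect rates hurt. A quick sketch (illustrative numbers only, not real fab data):

```python
# Toy yield-compounding sketch. With n sequential process steps, a die only
# survives if every step succeeds, so overall yield is per_step_yield ** n.
# The per-step figures below are invented to show how fast this compounds.

def compound_yield(per_step_yield, steps):
    """Fraction of good die surviving `steps` sequential process steps."""
    return per_step_yield ** steps

print(compound_yield(0.9999, 1000))  # 99.99% per step over 1000 steps
print(compound_yield(0.999, 1000))   # 99.9% per step over 1000 steps
```

Even at 99.99% per step, only about 90% of die survive 1000 steps; at 99.9% per step, barely a third do, which is why flash wants a completely perfected process.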
|
KOTEX GOD OF BLOOD posted:Isn't there some kind of quantum physics issue with that (disclaimer: I studied political science and remember reading about this on wikipedia)

The theoretical limitation usually cited as preventing transistor gate lengths from becoming arbitrarily small is direct quantum-mechanical tunneling from source to drain. It means the transistors still conduct electricity when switched off and can't really be fully switched off. Computer chips are built on the expectation that transistors conduct very little electricity when switched off, and because of that expectation, circuits are designed so that most of the transistors on a chip spend most of their time sitting idle and switched off.

silence_kit fucked around with this message at 13:39 on Jun 18, 2018 |
# ? Jun 18, 2018 12:07 |
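The source-to-drain tunneling described above is usually estimated with the WKB approximation for a rectangular barrier; a rough sketch (with $m^{*}$ the carrier effective mass and $\phi$ the barrier height, both device-dependent assumptions, not exact values for any real transistor):

```latex
% WKB transmission probability through a rectangular barrier of length L:
T \approx \exp\!\left(-\frac{2L}{\hbar}\sqrt{2\,m^{*}\phi}\right)
```

Since the exponent scales linearly with the barrier length $L$ (roughly the gate length), shrinking $L$ makes the off-state leakage grow exponentially, which is why such transistors "can't really be fully switched off."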
|
Intel tears out older node fab capacity all the time. They brownfield practically every new node. The model of "build a new fab, run until death" is the TSMC model, which works really great for them for a lot of reasons.
|
# ? Jun 18, 2018 21:27 |
|
movax posted:The less USB in my life the better. Thankfully I don’t have anything that requires 3.0. This is a new complaint.. what could anyone possibly have against USB?!
|
# ? Jun 18, 2018 21:33 |
|
redeyes posted:This is a new complaint.. what could anyone possibly have against USB?!

I'm being hyperbolic, but most of my BSODs have stemmed from USB 3.0 drivers (not Intel's, admittedly), and it's a bit annoying from a protocol and electrical perspective (for example, electrically isolating 480Mbps USB 2.0 is difficult due to the bi-directional NRZI physical layer). (It's really not bad and did wonders for consumers.)
|
# ? Jun 18, 2018 23:28 |
|
I still hate the drat connectors on usb.
|
# ? Jun 18, 2018 23:44 |
|
priznat posted:I still hate the drat connectors on usb.

I have a Motorola X4 phone (and a Nexus 5X before that) and I love the USB-C style connector. I eagerly await the bright future of USB-A connectors all going to C
|
# ? Jun 18, 2018 23:51 |
|
USB-C is unfortunately bad in other ways, which have less to do with the standard than with manufacturers being so flaky.
|
# ? Jun 18, 2018 23:56 |
|
I was trying to think - I only have HDs that are USB 3.0. USB-C is a loving bummer though. I get the feeling it's going to be a dead technology except for Macs and phones.
|
# ? Jun 19, 2018 00:10 |
|
I think Type-C is here to stay, but we're going to continue to see a two-tiered model for a while with desktops. Lots of accessories are still Type-A only, like mouse/KB receivers, wireless adapters, etc., and most peripherals with removable cables still use A-B cables unless they support 10Gbps operation. For that reason it would be strange to put out a motherboard without some rear Type-A ports, or a case without any in the front.

Many desktops have only one Type-C port, if any, and it's the fastest port in the box, so that really discourages making Type-C accessories which don't need the speed - why take up your one fast port with a mouse receiver? It would help if motherboard OEMs started mixing in some Type-C where they currently have A, even if they don't want to dedicate the bandwidth to make it 3.1-capable, but I can understand that it's probably cheaper and confuses users less to just use Type-C for 3.1.

With laptops, though, I feel like the writing is on the wall. A single port that is not only smaller than Type-A in both dimensions but can carry AV+data+power and is reversible is too good to pass up. Larger or more budget machines will continue to have some Type-A ports for a while, but we're already seeing PC manufacturers follow Apple's lead on the more premium thin-and-light machines and go all or nearly all Type-C.
|
# ? Jun 19, 2018 02:37 |
|
We still really don’t need USB-C on desktops. I’m sorry if this upsets some of you. Have they even sorted out how to make a front panel connector?
|
# ? Jun 19, 2018 05:34 |
|
priznat posted:I still hate the drat connectors on usb.

I have a MIDI keyboard with a USB-B connector, and I can make it disconnect from the computer while still in the socket by blowing on it.
|
# ? Jun 19, 2018 05:39 |
|
USB C is gonna replace B, Mini, and most of Micro... but probably not A for most stuff.
|
# ? Jun 19, 2018 06:54 |
|
|
VulgarandStupid posted:We still really don’t need USB-C on desktops. I’m sorry if this upsets some of you. Have they even sorted out how to make a front panel connector?

https://forums.somethingawful.com/showthread.php?threadid=3774409&pagenumber=475&perpage=40#post485007374

They exist, but are on few boards and cases.
|
# ? Jun 19, 2018 07:34 |