Khorne
May 1, 2002

orcane posted:

Did anyone expect "old games with shitty CPU-bound engines" not to run faster than they do on the 3.5-year-older platform?
I can't even think of a game that isn't CPU bound.

Khorne
May 1, 2002

Paul MaudDib posted:

Dream scenario: AMD drops a big increase in clockrates on 7nm next year (say, 4.7 GHz all-core turbo)
If you think that's a dream you're going to love the next year or two of AMD processors.

Khorne
May 1, 2002

repiv posted:

https://twitter.com/InstLatX64/status/969560033922035713

jesus christ intel, how is anyone supposed to target this mess
"make"

Khorne
May 1, 2002

sincx posted:

The moral of the story is that hard drives are horrible horrible things.

I have started looking into what it would take to replace all my storage with SSDs. Alas, it's still too expensive right now.
Yeah. When 8TB traditional drives are at $140, which works out to about $17.50/TB, SSDs can't really compete for mass storage.

Khorne
May 1, 2002

Cygni posted:

Dumb marketing aside, there’s a difference between a “cash grab” and a product that’s just not enticing for the price.
It's the 8700k, but they've been grabbing the best samples for months, wrecking people's odds of getting a good 8700k. The 8700k is available for around $300 most of the time.

Khorne
May 1, 2002

Mr.Radar posted:

Speaking of Linus and Intel products, he found one of those Intel "XCC" CPUs on the Computex show floor, but with a "normal" custom ambient water loop setup. He wasn't allowed to run CPUZ but was allowed to run Cinebench. In that configuration it scored just over 6000, which is about twice what the Threadripper 1950X scores (so the 32-core Threadripper 2 would probably beat it due to the Zen+ IPC improvements).
Zen+ has no IPC improvements. At best it has a slight memory latency improvement.

Khorne
May 1, 2002

eames posted:

Uh so if Intel skips 10nm they'll have to sell refined 14nm CPUs for another 2 years? Maybe we'll be up to 10 or 12 cores with Coffee Lake by then.
I'm pretty sure no one expects Intel to be leading desktop/server CPU competition again until 2021 or so. But who knows what will happen. It's all speculation.

We could be talking about how bad AMD screwed up when given a golden opportunity in a year or two. Intel could pull something out of their ass at some point. While they've had process problems that have resulted in no new architectures trickling down, surely the guys doing architecture stuff have done a whole hell of a lot of work that we haven't seen in the past 5-6 years?

AMD also has a strong history of having better products and still getting wildly outsold in all markets. It's hard to remember that because for most of this decade they've had largely inferior products.

Khorne fucked around with this message at 14:36 on Jun 23, 2018

Khorne
May 1, 2002
Outside of the high end of enthusiast processors and overclocking server-grade stuff, whether you have air or an AIO won't matter as long as you get something that cools well. The processors are going to hit their limit at the same point provided you have an adequate cooler, and whether it's running at 83c or 60c doesn't matter unless you're expecting huge spikes in ambient temperature. And even then, my i7 3770k maxed out at 78c-82c before delid in 20c ambient and was still stable at up to 48c ambient.

I have a 1070 that doesn't go 30c above ambient on its stock non-blower cooling. Even overvolted, it won't go above 56c or so at a normal room temperature provided I use an aggressive fan curve. My 3770k would have sucked on anything because the TIM was so bad. After delidding, it's not thermals holding me back. I'm just not willing to throw potentially damaging voltage at an ancient processor that I've abused endlessly.

Quality air coolers and the water cooling solutions that cost about the same perform almost identically in terms of noise and thermals. Sometimes, it even favors the air cooler.

AIOs take up less functional space in the case and people like how they look. Air coolers are a big hunk of metal with some fans attached. If you have an i7 or lower or current gen ryzen stuff, that's about the only difference between them besides "perception". If you want to venture into more-expensive-than-air territory, water cooling has a higher price ceiling, and at those higher prices you get quieter setups, if arguably a step down in aesthetics.

Khorne fucked around with this message at 13:04 on Jun 28, 2018

Khorne
May 1, 2002

quote:

I have a simple theory about this kind of stuff: as I tell people my opinions, one of two things happens. They like what I say, or they correct me when I'm wrong.
Man, I get the last one about 100% of the time.

Khorne
May 1, 2002

Palladium posted:

Besides, the majority of Intel's customer base can't even be bothered to check what CPU gen they're buying (there's still a hilarious % of DIYers who bought 7700Ks after Coffee Lake), let alone care about EDRAM.
"I always buy one generation behind" - something people actually say to me even though it makes absolutely no sense in the building a computer world 95% of the time. The other 5% is like ryzen 1700 being $140 pricing or before the GPU aftermarket prices got screwed up buying a used last gen GPU for a good price.

They're the same people who buy any old Intel motherboard because it's "compatible with i5, all i5s should work with all i5 motherboards" and then damage the socket and processor.

Khorne
May 1, 2002

k-uno posted:

Thanks to everyone who's chimed in! I should explain a bit: I have access to a cluster at my university, but I don't end up needing to use it very often. Much of my work starts as pen-and-paper calculations that then get explored/verified numerically, and when, inevitably, something doesn't act how I expect it to, lots of tinkering and trial and error follows. This usually involves lots of short calculations that take minutes or hours, and using a cluster for that is really annoying given the hassle of submitting jobs into a queue and waiting for them to execute. When things do start to stretch into days they usually get submitted to the cluster, but those nodes are all two-socket Xeon machines, so the NUMA issues I was asking about crop up.
Provided the cluster uses any modern scheduler, you can specify the resources you want at job submission, right down to the socket for NUMA grouping. Googling "schedulernamehere affinity socket" will probably bring you to the documentation.

You can also usually get jobs to dispatch faster by requesting the smallest reasonable duration and less RAM. In many university cluster setups, the queues with a shorter cap on runtime are also favored for quick job dispatch.

There's all kinds of neat stuff on the scheduler side of HPC that researchers never use because they don't know it exists.
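To make that concrete: with SLURM, something along the lines of srun --cpu-bind=sockets binds tasks at dispatch time (check your site's docs; flags vary by scheduler and version). Here's a hedged sketch of the same idea at the process level on Linux, with core IDs 0-7 assumed to belong to socket 0; check lscpu or numactl --hardware for the real topology.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        /* pin to the (assumed) socket-0 cores so memory stays NUMA-local */
        for (int cpu = 0; cpu < 8; cpu++)
            CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        puts("pinned to socket 0; run the NUMA-sensitive work here");
        return 0;
    }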

Anyhow, it sounds like just building beefy lab machines is probably good enough for your use case.

Khorne fucked around with this message at 01:04 on Aug 12, 2018

Khorne
May 1, 2002

Cygni posted:

There is no physical way to get 8 Lake cores at 5GHz on a 95W budget, so we're either looking at a situation where it will just blow by the TDP, or something like the 8700 non-K where the chip will boost to that number for 30 seconds or so and then cut itself back to stay under the 95W budget, unless you go into the BIOS and let it ignore the TDP.

For comparison, the 8-core 7820X will pull over 180 watts by itself at max load at 4GHz, and something like 300 watts when overclocked to 5GHz. So this might actually be a situation where your VRM may matter on a Z390 board.

edit: Misinterpreted what was being posted.

Khorne fucked around with this message at 22:48 on Aug 16, 2018

Khorne
May 1, 2002

fishmech posted:

At least AMD makes it simple for CPU lines on laptops: if it's AMD it's a bad laptop.

:v:
AMD's current gen laptop CPUs aren't bad. They're just being put in garbage laptops by manufacturers. :(

Khorne
May 1, 2002

GRINDCORE MEGGIDO posted:

Is there any old SIMD hardware they can remove?
The core adjacent to the iGPU is consistently 5c-10c lower on certain gens of Intel CPUs.

Do you see where I'm going with this?

Khorne
May 1, 2002

sauer kraut posted:

Ok so the 9700K at 69° and 141W gets praised over the moon, while the 9900K at 70°/144W is "one warm chip".
Video game journalism right there.
Pretty much. I ran my 3770k at 88c with an NH-D14 for over 6 years. There's no downside.

I suppose under normal loads it was closer to 78c-80c.

5-5.3GHz on an 8-core CPU with current processes is going to throw off serious heat. That's just physics. Thermals are a poor reason to slam the CPU, because it doesn't risk degradation or impede its performance. The revised power draw numbers are reasonable too, unless you're comparing them vs zen2 next year because you're from the future. If you care about power draw or thermals, clock it at a number lower than 5, scale voltage appropriately, and it will do great.
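Back-of-envelope, with the 4.5GHz/1.15V point as an illustrative guess rather than a measured V/f curve: dynamic power scales roughly with capacitance times voltage squared times frequency, so a ~10% clock cut plus the matching voltage drop sheds roughly 30% of the heat:

    P_{\text{dyn}} \approx C\,V^2 f, \qquad
    \frac{P_{4.5}}{P_{5.0}} \approx \frac{4.5}{5.0}\left(\frac{1.15}{1.30}\right)^2 \approx 0.90 \times 0.78 \approx 0.70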

Cost and cost:performance are reasonable attacks on it. I haven't looked at the benchmarks much yet because I'm trying real hard to hold out for zen2 and I suspect they are "the same as the 8700k, except when the 2 extra cores come into play". Which isn't bad. At the same time, it's just another coffee/sky lake CPU.

Khorne fucked around with this message at 15:04 on Oct 20, 2018

Khorne
May 1, 2002

Zedsdeadbaby posted:

Oh god this probably means ryzen gear will go up in price once AMD sees demand go up for them in the wake of Intel struggling to keep supply up

I really hate sitting on the fence like this
It probably won't. AMD is going to use the current market situation to grab as much share as they can. At least, that seems to be their strategy so far. Maybe zen2 will command a premium because I can't imagine supply will be super super high on it, and it is the first time a new process has been around in ages.

Khorne
May 1, 2002

eames posted:

Somebody (:iiam:) leaked Icelake Geekbench scores on the day after the 10nm media rumors... doubled L2 cache, 50% larger L1D cache, >10% IPC increase. :salt:
The CPU guys haven't been asleep. They just worked on future stuff and never backported. It didn't make business sense to be putting much work into an old node that you'll be off of any year now. Especially when AMD wasn't doing a damn thing, and when they finally were you were still going to be one step ahead of them. Except now you aren't, due to delays.

Khorne fucked around with this message at 14:00 on Oct 24, 2018

Khorne
May 1, 2002

HalloKitty posted:

* in AVX workloads, using a compiler deliberately favouring Intel
ICC performance for AMD is generally on par with or slightly better than other compilers these days. Yes, Intel gets more out of it, but in the real world you compile with compilers that actually exist.

If I were going to attack those benchmarks it'd be on AVX and memory bandwidth grounds. If it's a 20% increase over the existing product then there's nowhere near a 240% performance increase, because EPYC is competitive with the current product. Additionally, AMD's new EPYC processors will likely have more memory channels and are coming out "soon".

Khorne fucked around with this message at 16:02 on Nov 5, 2018

Khorne
May 1, 2002

forbidden dialectics posted:

Alright, did a lot of testing on my 9900k over the holiday. Came to two stable scenarios. Which would you choose:

5.2 GHz all cores, 1.33V with droop under load to around 1.29V, maxes out around 75C under the most intense workloads.

or

5.3 GHz all cores, 1.425V with droop to around 1.38, maxes out around 92C under the most intense workloads?

That's a lot of extra voltage for... 100MHz, which seems pretty silly, and I'm probably whining about a .1-percentile chip anyways. But then again, this whole business is intensely silly.
Set the lower one in your motherboard, use it regularly, and then use OC software to apply the higher one when you need the extra boost. If you ever do.

That's how I've managed my 3770k and I can still throw >1.4v at it 7 years later. Most of the time I don't need to, but there is an fps game or two I play where I do.

Khorne
May 1, 2002

wargames posted:

I am personally looking forward to the :airquote:7nm:airquote: AMD stuff that is due 1H 2019?
It's apples and oranges. Intel 10nm is about equivalent to the 7nm TSMC process AMD is using.

Khorne
May 1, 2002

WhyteRyce posted:

To look at it another way, just plain consumer vs. enterprise: there is a mountain of work that can be put into a product that isn't measured in pure physical differences. Are you mad about consumers being locked out of enterprise features? Because Intel spends millions putting stuff in for enterprise customers even if it makes no sense for the home user. Are you mad OOB or remote management capabilities aren't available to you unless you buy the right SKU?
Intel's segmentation is ridiculous.
  • Hyperthreading segmentation is pure greed. The functionality is there. They're selling consumers worse CPUs for inflated prices.
  • ECC segmentation makes little sense. All of AMD's CPUs and motherboards support ECC; there's no 'official' support on the Bx50 and Xx70 motherboards, but it just works.
  • Locking people out of overclocking, including running RAM at rated speeds, and charging more for it is completely unacceptable.
  • Constantly using a new motherboard socket, even on what is effectively the same process, is ridiculously anti-consumer.
  • Locking clock speeds well below what the silicon is capable of is really anti-consumer. They weren't even locked at the cusp of greatly diminishing returns, which would have made sense.

Those five are definitely anti-consumer intentional segmentation with little to no impact on profits. The last one was literally done to show 'progress' when they made next to none. They were a behemoth twiddling their thumbs, peddling the same desktop CPU year after year. I got a lot of mileage out of my c2d and i7 3770k, but there are millions of people who bought locked i5s/i7s or upgraded to bigger-number CPUs that offered little to nothing over the same crap they've sold since. The 8700k was the first actual-progress consumer CPU in ages; they added an extra 2 cores, which was great. With the 9900k they added another 2 cores, stopped using awful TIM, and added competent built-in OC so they weren't getting crushed by competition on stock benchmarks.

Don't even get me started on server stuff. With no competition, Intel went full greed in all markets. Now there's real competition forcing them to be more honest which is good for everyone.

Khorne fucked around with this message at 14:06 on Jan 18, 2019

Khorne
May 1, 2002

craig588 posted:

The clock speeds aren't really arbitrarily being locked down. They meet the power envelopes they need to. CPUs can do much higher, but go from 100 watts to 1 kilowatt, so hot that even with liquid nitrogen they're seeing positive temperatures.
It really depends on which generation and processor you're looking at. For example, the Sandy Bridge i7 2600 was 3.4GHz with a 3.8 turbo. Ivy Bridge was similar (3.4/3.9). Both of those could have been clocked more aggressively. And that's not even touching on i5s.

If you're talking about the latest generation, then they're at a very reasonable place, if not slightly too aggressive (3.6 base, 4.9 boost). If the previous generations were like that, the 2600 would boost to ~4.7 and the 3770 would boost to at least 4.2 if not 4.4. If they had used actual solder for the 3770, then a 4.6-4.7 boost.

I agree, there are points of diminishing returns and % yields to consider. One of the advantages of enabling overclocking is to allow people to underclock and get more power efficiency out of their processor.

Khorne fucked around with this message at 14:21 on Jan 18, 2019

Khorne
May 1, 2002

Otakufag posted:

Would a 2700x be able to nicely multibox 8 accounts like that, or does it get hampered by its shittier single-threaded perf?
A 3570k with a gtx 660 Ti can run 5 accounts at once.

What I'm saying is, yes it can run 8 accounts no problem.

Whether you want to go with a 2700x or 9900k is up to you. I'd expect the performance difference is similar to running 1 copy of the game, but I'm not very up to date on current WoW performance. I just know some idiot who five-boxes with that setup I described above.

Khorne fucked around with this message at 17:23 on Feb 27, 2019

Khorne
May 1, 2002

OhFunny posted:

So that seems bad.

Is it something that should worry regular desktop users like me or is it more of a concern for enterprises?
The timescale of the attack seems impractical for most valuable consumer data.

Khorne
May 1, 2002
It's a reasonable move if they can compete with AMD on price once their increased 14nm fab capacity comes online.

10nm from Intel isn't going to clock as well as 14nm++..+, and it's unlikely it has a huge IPC increase. Combine that with Intel 10nm/TSMC 7nm likely being a short-lived stepping-stone node due to EUV being here for real this time, finally, and you have the decision they made.

Intel is completely dominant in the laptop space and needs to maintain their lead there. The characteristics of the 10nm process are great for that. No one wants to buy a consumer desktop CPU on 10nm that performs worse than the refined 14nm process.

Khorne fucked around with this message at 18:22 on Apr 24, 2019

Khorne
May 1, 2002

PC LOAD LETTER posted:

the 7nm process they're working on will run into similar issues and be just as uncompetitive.

I strongly disagree with this. Intel 7nm uses EUV. Part of the problem with their 10nm process is they went for overly ambitious dimensions; pitch in particular is supposedly around the theoretical limit of what you can do without EUV. Lots of 10nm choices were ambitious.

They've been developing 7nm and what comes after 7nm in parallel. Intel does a lot of business. They aren't just a CPU company. I don't even think other fabs have the capacity, given their other customers, to fab all of what Intel makes.

No one really knows if it will harm their business long-term, but they should bounce back by 2022 or so. I figured it'd be 2021, but that roadmap seems to hint at 2022.

I'm a mild AMD fan, but I really don't see the doom and gloom unless they have problems with 7nm too. Intel's 10nm was real ambitious and does some things the competition's equivalent process doesn't. These things should carry forward to future nodes, and the nature of EUV really benefits how Intel does things.

Intel has a whole lot of safety in the server market, even if AMD releases a better product. AMD's strategy there is just to sell to the biggest of the big. The OEM market is where AMD might start cutting in, provided there aren't anti-competitive practices in place like in the past. The problem there is public perception of Intel and AMD; Intel might retain a massive share in that market solely based on their brand.

The enthusiast market is already getting massive amounts of share stolen by AMD. Sales at many places are 50:50 between Intel and AMD CPUs, or even favor AMD. The value proposition of ryzen is great at the low-mid end, and not everyone wants to or can afford to drop the money on a 9900k. Intel also has no competition for Threadripper, but that's a very small market.

Khorne fucked around with this message at 13:10 on Apr 25, 2019

Khorne
May 1, 2002

PC LOAD LETTER posted:

So I'm just gonna point out first of all the post you're replying to was pretty clearly speculative and not a "this is guaranteed to all play out this way" type post there Khorne.
I didn't mean for the post to come across as aggressive or anything. I'm also just speculating.

quote:

EUV is pretty hard to do, harder to do than even the ambitious goals that Intel seems to have set for 10nm, and only relatively recently has it become possible to do on a volume process. [...]
I know 10nm is not designed around EUV. EUV just seems like a bit of a reset button to me once it gets rolling. TSMC already started 5nm risk production.

quote:

I'm not a process expert but everything I've heard about Intel's 10nm (at least as far as what Intel's 10nm was supposed to achieve, so far the real world results are falling short of those goals) vs TSMC's "7nm" is they were supposed to trade blows when it came to transistor density, potential clockspeeds, and all while at comparable power and heat envelopes and that they essentially were considered about equal over all.
The processes are comparable. Intel's 10nm uses cobalt in some of the layers. I don't believe TSMC's process uses cobalt like Intel's does. Cobalt should be a reasonable advantage going forward, but it may be part of the reason for 10nm's failing. GloFo was planning on using it for the 7nm node they scrapped, if I remember right.

I didn't quote the rest because I agree with what you said.

Khorne fucked around with this message at 13:52 on Apr 25, 2019

Khorne
May 1, 2002

PC LOAD LETTER posted:

Then I'm not really sure why you were bringing EUV into the discussion at all. *shrugs* Maybe I misunderstood a point your previous comment was trying to make.

I've seen comments from other people who seem to know what they're talking about with conflicting opinions about its use, so I don't know if it's really all that great or worth the trouble, at least for Intel's 10nm process. They generally seemed to think it might be more important, or even necessary, for future smaller processes, but that didn't seem clear to everyone either. Either which way, it doesn't seem to be a game changer for Intel here. More of a "well that's sure nifty" kinda thing.
I'm talking in the context of future processes. 10nm being a failure doesn't necessarily mean future processes will be, and their experience with a material and process that may be valuable for smaller nodes should carry forward quite well. I don't think 10nm will get much better than it is now. I think that's the misunderstanding.

2022 is around when Intel 7nm should hit real-world production, and this roadmap seems to align with that.

Khorne fucked around with this message at 14:14 on Apr 25, 2019

Khorne
May 1, 2002

JawnV6 posted:

what i'm not understanding about MDS is how do you induce the OS/VM/enclave or whatever to put the secret in the fill buffer repeatedly
Most of the recent exploits are not a threat to consumers on their own hardware.

Khorne
May 1, 2002

JawnV6 posted:

okay? idk what that changes but sure, now we're on a cloud host, how do you induce the OS/VM/enclave/other VM guest or whatever to put their secrets into the fill buffer
I'm not sure why I quoted you.

You can't induce the OS/VM/etc to put things there. They have to have already put them in the relevant places "recently". That holds even for ZombieLoad, the biggest of the four recently disclosed exploits and possibly the most serious to date.

You can see a proof of concept here.

Khorne fucked around with this message at 21:51 on May 16, 2019

Khorne
May 1, 2002

sincx posted:

That's like saying Boeing's 737 MAX design failure isn't a big deal because the plane only crashes occasionally and not all the time.
There are two deals here.

The big deal, which is the failing of design, process, and culture in the company.

The other deal, which is "what does this mean for me, a consumer".

Media and people hype these up like it's going to happen to you. I've seen "your private browsing history is at risk", "your credit cards are at risk", "your passwords are at risk" all day. Unless you're individually targeted by state-level espionage teams, these are all pretty much non-threats. When the original Spectre/Meltdown dropped and it was possible to execute them from a JavaScript VM, that wasn't even a credible threat. And that's the dream vector against consumers.

This is the kind of attack that gets placed into an official release of a piece of software that is commonly run by the organization being targeted. Even if you pull that off, you could run it for weeks, manage to get the data out, and still not extract what you were looking for. And analyzing the data is a herculean task in itself with these new exploits. You'd likely have to hope for something that gets run near startup or shutdown, hope the targeted machine starts up and shuts down a lot, hope you end up on the same physical core, and compare the data to what you think is an identically built environment during startup/shutdown/normal use to get anything meaningful out of it.

If someone successfully uses one of these attacks without relying on additional information about the environment or other exploits/compromised system type stuff my mind will be blown. The data analysis step alone seems daunting for most of these. They don't even come with memory addresses or any hints as to where the raw data came from, and it's going to be a mishmash of everything the CPU has been doing.

Maybe I'm wrong and it's easier to get data out of these than I know. They seem to have patched it before disclosure this time, but it's hard to tell if that's a PR move or due to it being that much of a threat.

bobfather posted:

So you're saying that mandatory microcode updates that hurt real-world performance for consumers don't mean a thing?

I'm actually surprised there's no class action lawsuit against Intel for selling processors based on advertised performance benchmarks that are now impossible to achieve in the real world due to microcode mitigations.
Examples of anything lawsuit-worthy? They still hit the same clock speeds. They still perform about as well relative to other Intel CPUs that have been patched for these exploits. I have a hard time believing anyone would hold them accountable for fixing previously unknown exploits with optional microcode updates, especially if they cease making the claims after the updates.

Khorne fucked around with this message at 00:01 on May 17, 2019

Khorne
May 1, 2002

DrDork posted:

Regardless, that AMD is coming out swinging of late is nothing but a good thing for the CPU industry overall.
Intel is going to be forced to compete on price which is a win for everybody.

Khorne
May 1, 2002

eames posted:

Intel allegedly tried to pay university researchers a... uh, bug bounty that no one asked for.

https://www.techpowerup.com/255563/intel-tried-to-bribe-dutch-university-to-suppress-knowledge-of-mds-vulnerability
That's actually pretty cool. I mean, it clearly benefits Intel but it also benefits researchers.

Khorne
May 1, 2002

SwissArmyDruid posted:

It's okay, guys, I still think Intel's..... okay! I mean, like as not, Thunderbolt is still an Intel-only thing until USB 4, and that's not gonna show up until sometime in 2022 at the earliest.
AMD's x570 boards support Thunderbolt. Sort of.

As long as Intel's 7nm products hit in 2021 and are competitive Intel will be fine.

Khorne
May 1, 2002

MaxxBot posted:

It's from "International Businiess Strategies" I'm not sure how accurate the exact numbers are but the costs are definitely ramping up and number of fabs going down.

https://www.extremetech.com/computing/272096-3nm-process-node
The comments on this article are worse than YouTube's. It's all weird conspiracy theories and jerking it to the military-industrial complex. And worse.

Malcolm XML posted:

Did anyone publicly state the reason for the delays? Rumor is that the transition to cobalt fucked them, but I don't have a good source for that.
It was a really ambitious node, right on the limits of what you can theoretically do without EUV.

The delays resulted in numerous refinements to 14nm, making 10nm struggle to compete with the very refined 14nm.

They didn't take the time to design new archs for 14nm, because 10nm was perpetually coming for the better part of a decade. This is Intel's biggest mistake in my opinion. They only very weakly adapted to the reality of the situation and tried to stay on an unsustainable course.

When you combine all of these things, 10nm is an expensive nightmare that only really would have worked if they could have gotten it to market a few years before EUV. Unfortunately, or fortunately if your 10nm node is a non-viable money pit, EUV is here.

I'm hoping the cobalt stuff pays off long-term on future nodes. It's actually pretty cool. It was probably part of their delays, because other fabs noped out of using cobalt like Intel is doing with their 10nm process.

I also get that it doesn't answer your question of exactly what went wrong. I'm not sure if the public knows exactly what went wrong with getting the 10nm fab production ready. Intel never hit good yields, and they have pretty much abandoned the node - to the point where they're just going to try and run whatever they can from a fab or two and are backporting some locations to 14nm and converting other locations to 7nm.

Khorne fucked around with this message at 04:44 on Jun 20, 2019

Khorne
May 1, 2002

lDDQD posted:

How are Blizzard so shit at programming? It's not like they're a small indie studio that can't attract top talent.
That's the problem. Throwing more people at things creates new problems and decreases quality in spots that require savant-like insight into the final product.

SC2 fps doesn't dip much. I play(ed) it on a 3770k and never had fps dips that impacted playability even in team games. Admittedly, I haven't played since the first expansion so I don't know if they did anything dumb since then.

Parallelizing something like the SC2 engine requires designing it as parallel from the ground up. It's likely SC2 started development in the Pentium 4 era, aka no multicore. It wasn't on their radar, and it's not really something you can go back and add in later to a complex lockstep simulation.
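To make "lockstep" concrete, here's a toy sketch (types and names are hypothetical) of why you can't just thread the update loop after the fact: every client must produce bit-identical state from the same inputs each tick, and the iteration order is part of that contract.

    /* Toy lockstep tick: every client runs this with identical inputs and
       must end up with bit-identical state. Splitting the loop across
       threads changes float accumulation and event order, and clients
       desync. */
    typedef struct { float x, hp; } Unit;

    void tick(Unit *units, int n, const int *inputs) {
        for (int i = 0; i < n; i++) {       /* fixed, deterministic order */
            units[i].x += 0.1f * (float)inputs[i];
            if (units[i].hp <= 0.0f)
                units[i].hp = 0.0f;         /* same tick on every client */
        }
    }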

Khorne fucked around with this message at 16:18 on Jul 3, 2019

Khorne
May 1, 2002

Otakufag posted:

I could have gotten an 8700k 2 years ago before the shortage and price increase. Fuck me for buying all that reddit AMD hype.
You and me both. They were $289 at Microcenter, with another $30 off a motherboard, at one point.

Khorne
May 1, 2002

Otakufag posted:

According to this leak of Intel's Comet Lake desktop lineup, 8 Cores will cost $339 and 6 Cores $179. Do you think it's fake?
I hope not, because AMD's lineup can receive massive price drops and still have nice margins. If it's real, we could see a sub-$400 3900x and a $250 3700x in response, provided Intel can pump them out at notable volume.

I'm mildly confused about how wccftech gets the 9900k's price wrong, though. It's $440-$450 now in the US.

I think it's probably fake. Intel just recently announced the 9900KS and it's not even out yet. At the very least, I wouldn't expect something like this until sometime in Q2 2020, and at that point AMD is preparing for the 7nm+ zen3 launch.

Khorne fucked around with this message at 05:48 on Jul 10, 2019

Khorne
May 1, 2002

eames posted:

It's also worth remembering that Zen 2 was the last design that Keller was officially involved in and he's now working at Intel.
Most of the Zen engineering work wasn't done directly by Keller, though.

Khorne fucked around with this message at 15:32 on Jul 11, 2019

Khorne
May 1, 2002

DrDork posted:

I'm sure it's more expensive than the more pedestrian 2400MHz RAM, but there's no way it's costing $100/GB, either. Offering a 4GB version instead of just bumping the base price by $50 or so might buy them some additional sales, but I wouldn't want to be using anything specced out like that.
Welcome to OEM laptop pricing. Lenovo was charging $300 to get a 250GB SSD last year when you could buy a better one for $50. For boring old RAM they often charge $150+ for an 8GB stick you could buy for $32 yourself.
