Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I think it'd be nice if people had options for new ECC home servers that didn't entail overpaying for aftermarket server-chipset motherboards or buying a microATX prebuilt from Dell/Lenovo/HP. I like my PowerEdge T20 and all, but I wish I could have gotten something with a standard ATX power supply and more drive bays for less than twice the cost.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I think that usually the idea is that whatever you're doing on a "workstation" is important enough that your output shouldn't be subject to soft errors that might not be detected. This is because to some people the term carries the connotation of doing protein folding or CAD or video editing or some such important poo poo, IDK.

Your particular circumstances will of course affect the chance of soft errors happening at all, of them changing something important, and/or of them going unnoticed.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah - my understanding is that Nehalem is the closest thing to a new design we've gotten from Intel in the past 10 years: it introduced integrated memory controllers and PCIe lanes, with QPI replacing the old northbridge/FSB setup that persisted through Core 2, and it reintroduced HT, which had been absent since NetBurst went off the market. Sandy Bridge, Haswell and Skylake were all refinements of that design. (Lynnfield/1156 was actually a step back, just for cost reasons I think - the memory controller was on-package but not on-die.) Whatever follows Cannon Lake may be a new design, if Intel thinks they're nearly out of tricks to squeeze more performance out of what they have.

I'm not sure whether (but could totally believe that) Nehalem's pipeline was still based on Core 2's - and, if you go far enough back, the P3's as well - so I'm not sure at what point you can point to a chip and say it's a "totally" new design. Maybe the Atom qualifies? I know that a lot of the idea behind it was "make a chip as simple as the original Pentium but running as fast as we can get it these days, plus any improvements which give you at least a 2:1 return on performance:wattage."

Eletriarnation fucked around with this message at 15:22 on May 27, 2017

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

LoopyJuice posted:

It also never goes over 60C under full GPU/CPU load on a custom water loop whereas my old i5 used to hit almost 80C for reasons unknown (were 2500k chips soldered or TIM?) whilst the GTX 970 was hovering at about 45-50 under load in the same loop.

They're the last generation of Intel quad-core to be soldered. What cooler did you have on the 2500K? I never see 70C with mine at 4.4GHz/1.38V under a Hyper 212+, but if I were to put enough juice through it to get 4.6 stable I might be in the same place you were.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, I think that would be the "just enough to light up the platform" chip for people who need as many GB of memory and/or PCIe lanes as they can get at a given price point and don't care about the actual CPU performance much e.g. Xeon E5-2603.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Malcolm XML posted:

AMD breaking 15 on afterhours

Yeah, this was what I came here to mention - their stock is at like a 10-year high right now. Perfect time to sell the 300 shares I bought at $9-10 and fund a Zen+ or Coffee Lake rig with the delta.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Paul MaudDib posted:

This seems like good timing because unless AMD is sitting on a major reveal then Vega is just a disaster, and it's going to be publicly called out as such in reviews. Especially given how much AMD seemed to be implying that Vega FE wasn't representative of RX Vega's performance. When it comes down to it, AMD and their fanclub have been heaping a lot of poo poo on reviewers (all of whom went out of their way to get samples with their own money) with the whole "it's not a gaming card!" schtick. And when it turns out that no, RX Vega doesn't come with a pony and free blowjobs then reviewers are gonna have their told-you-so moment.

Early reports from Finland and Sweden put the MSRP for Vega XTX (with AIO) at 850 EUR before VAT. They don't have the performance and if they price it appropriately the margins are going to suck. Either way it's a tremendous fail from a financial standpoint.

I mean, AMD is more than RTG obviously, but having a brand new very expensive GPU be a steaming turd doesn't seem like it will help stock prices. It seems likely they will tank after the launch and eventually recover, but if you want out then now seems like a good time to do it.

I suck at stock trading though so you're probably better off not listening to me :v:

From having watched their stock performance for the past year since buying the shares, I mostly agree - I think you could do a lot worse than just buying after GPU news and selling after CPU news.

I bought 300x at $9 back when Ryzen 7 was incoming but still unreleased, based on the strength of the rumors, thinking "if the hype leaks have this many actual strong benchmarks attached then they should do OK, unless everyone's just lying." I sold 100x at $13.50 right before release to lock in some gains in case it was a horrible bomb, and was tempted to sell the remaining 200 when they briefly spiked to $15 following release but waited too long. When they dipped back down to $10 I thought "they're worth more than this - Ryzen's strength implies a lot of potential with Epyc, and selling server CPUs will make them rich even if Vega is a bust" and bought back the 100x I sold at $13.50.

I had been watching this week already for any reaction to the Threadripper announcements, but forgot about the quarterly financials and of course that's an even bigger deal. Now, selling all 300 at $15.30, I think my total gross profit from the whole adventure is $2240 if my math is right. Once consumer Vega hits and is so terrible it knocks them back down to $10-11, maybe I'll try to repeat it.
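Quick sanity check on that figure, using only the share counts and prices above and ignoring commissions and taxes:

```c
#include <stdio.h>

/* Gross profit check using only the figures from the post above:
   buy 300 @ $9, sell 100 @ $13.50, buy back 100 @ $10, sell 300 @ $15.30.
   Commissions and taxes are ignored. */
int main(void) {
    double spent    = 300 * 9.00    /* initial purchase        */
                    + 100 * 10.00;  /* buy-back after the dip  */
    double received = 100 * 13.50   /* pre-launch partial sale */
                    + 300 * 15.30;  /* selling everything now  */
    printf("Gross profit: $%.2f\n", received - spent); /* prints $2240.00 */
    return 0;
}
```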

Of course, I also don't actually know anything about trading but I suspect that describes most people who do it. At least by knowing my ignorance I'm one step up, as Socrates would tell us.

Eletriarnation fucked around with this message at 19:51 on Jul 26, 2017

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Mr Shiny Pants posted:

If someone would be so kind to answer this, I would be much obliged.

If the CPU takes ECC, do I still need to figure out if the Mobo supports it?

God, it's been ages building my own system........

Yeah, even with integrated memory controllers there's usually some kind of hook required in the chipset to make it work - at least with Intel. I've put a Xeon L5520 in an X58 board and an X3440 in a P55 board and in both cases, the processor supports ECC but there's just no way to enable it even with the right DIMMs inserted.

I feel like with Ryzen there were some consumer boards (from ASUS, at least?) which had the ability to run in ECC mode, but I don't know how common a feature it is. AMD's official position is that the feature hasn't gone through full validation testing like it would on server chips but as far as they're aware there shouldn't be any problems with a board that supports it.
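If anyone wants to check whether ECC is actually doing something rather than just being tolerated, here's a rough sketch for a Linux box using the kernel's EDAC sysfs counters - it assumes an EDAC driver exists for your memory controller and that mc0 is the right instance, so adjust as needed:

```c
#include <stdio.h>

/* If the board/BIOS never enables ECC, no EDAC memory controller shows up.
   If mc0 is present and exposes error counters, ECC reporting is live. */
static long read_counter(const char *path) {
    FILE *f = fopen(path, "r");
    long value = -1;
    if (f) {
        if (fscanf(f, "%ld", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;
}

int main(void) {
    long ce = read_counter("/sys/devices/system/edac/mc/mc0/ce_count");
    long ue = read_counter("/sys/devices/system/edac/mc/mc0/ue_count");

    if (ce < 0 && ue < 0)
        printf("No EDAC memory controller reported - ECC is probably not active.\n");
    else
        printf("ECC reporting active: %ld corrected / %ld uncorrected errors logged.\n",
               ce, ue);
    return 0;
}
```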

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Gwaihir posted:

Apparently (or, going by the anandtech article) the deal with OEMs constantly doing the single channel ram on laptops thing is (It's by no means an AMD only thing btw), selling notebooks with the ability to upgrade the ram via only buying one chip/popping it in is a serious market requirement. And since 4 slot boards are reserved for gaming machines/workstations which always come with dual channel anyhow, that means just shipping with only one slot populated.

That, and on the laptops where it always shows up, single channel isn't bad enough for users to really notice.

Yeah, I think this would be a lot better if they just improved the messaging. I'm delighted by the idea of buying a system which has 1x8 already and I can just pay aftermarket price on the remaining 1x8 to get dual-channel 16GB. I don't want to buy a system which can never get dual-channel though, and if I see "single-channel" next to RAM with no other explanation then I might assume that it's actually permanently disabled like Carrizo-L instead of just not the shipping configuration.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
It's for larger TDP chips than the 15W that Ryzen mobile is sitting at, I believe... more of the "mobile workstation" kind of thing.

Would make a killer SFF desktop too, good Skull Canyon successor.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
If you don't want your money going to big corporations, just buy all your poo poo used? :shrug:

I mean, yeah, no ethical consumption under capitalism etc. but it's the best you got, I think.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

A SWEATY FATBEARD posted:

Yeah that's what's confusing me as well - well since both the keyboard and the mouse are wireless, I suspect that the USB dongles are picking up neighbor's keyboard signal or somesuch. No harmful interference though. :)

edit: I also have a Bluetooth dongle, is the mainboard BIOS "smart" enough to try to connect to BT mice, even though the mouse might be in a different apartment altogether? :)

Are they on separate receivers? The receivers may both be identifying themselves to the system as KB+mouse combos.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
To me the thing that's least believable about it is that they'd come up with a 12-core 5.1GHz chip, which would be able to murder anything Intel is putting out for under four figures, and then they'd sell it for $450. Like... I'd stand in line to pay $700 for that and I'm a cheapass who's been sitting on a 2500K for 6 years looking at new things and going "eh... not good enough yet."

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I love that one of the exploits requires a BIOS reflash, which for me is beyond even physical access in the realm of "if you can do this, can't you do whatever you want already?"

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

FaustianQ posted:

Phenom IIs are still okay general use processors, but Core2 is so much more common and cheaper if you just need a computer that powers on.

I agree with this in general, but there's kind of a glaring problem now: Core 2 has a hardware security vulnerability that Intel has said they are never going to fix. I may be overly cautious, but I'm looking at my E8600-based HTPC and thinking it might be time to replace it with something newer.

NewFatMike posted:

I'd be curious what you can do on an mITX B450 board with a 1700 or so. Do you get any better things? Super small form factor stuff rules.

Is there going to be a B450? I have only heard X470 announced so far.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Klyith posted:

For a HTPC? If you're using an OS that gets patches to fix the Meltdown exploit, that really should be sufficient for home user security. Meltdown is the one that can read your memory in real-time using javascript on a webpage.

I could definitely be mistaken but my understanding is that Spectre also allows reading the memory contents of whatever process gets compromised (of course, a big deal for a browser) and that it has two variants, one of which must be mitigated in firmware and the other in software. It seems like a difficult exploit to use effectively, especially if you're trying to undermine a lot of disparate systems instead of one particular well-defined target, but I'm not sure if that's good enough for me to ignore it.

I'm not running out to buy anything right now, just thinking that I might move up the replacement of my old X3440 workstation so that I can slot it in as a newer HTPC.

My Core 2 home server I'll probably leave alone for now, since the only ways it communicates with the outside world are Deluge, yum and Plex and I don't see any of those being a big concern.
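For what it's worth, a reasonably recent Linux kernel will just tell you what it thinks its exposure is - a quick sketch, assuming a 4.15+ kernel that exposes the vulnerabilities sysfs entries:

```c
#include <stdio.h>
#include <string.h>

/* Prints the kernel's own assessment for each issue (Linux 4.15+ exposes
   these files). "Mitigation: ..." means the software/microcode fix discussed
   above is in place; "Vulnerable" means it isn't; "Not affected" speaks for
   itself. */
int main(void) {
    const char *names[] = { "meltdown", "spectre_v1", "spectre_v2" };
    char path[128], line[256];

    for (int i = 0; i < 3; i++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/vulnerabilities/%s", names[i]);
        FILE *f = fopen(path, "r");
        if (!f) {
            printf("%-10s: not reported by this kernel\n", names[i]);
            continue;
        }
        if (fgets(line, sizeof(line), f)) {
            line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */
            printf("%-10s: %s\n", names[i], line);
        }
        fclose(f);
    }
    return 0;
}
```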

Eletriarnation fucked around with this message at 20:36 on Apr 14, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

B-Mac posted:

That user review seems better than most of the site reviews.

Yeah, there's a pretty impressive level of detail. I'm particularly struck by the part about Pinnacle Ridge using a lot more power than advertised and the 2700X really being a 140W part. I would be curious to see similar graphs for wattage required at various performance levels for Raven Ridge, since I've been considering getting a 2200G to try underclocking it. Wattage figures for Ryzen Embedded suggest that it could get close to Core M territory and still deliver enough performance for a home server, but I haven't seen any mobos with the embedded version yet.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Comparing the 2400GE to a 2700U, I'm having a hard time seeing why you would use the former in a laptop unless it's substantially cheaper. Presumably the configurable TDP would allow a 2700U to turbo up to similar performance if set to 25W, and all the specs look the same except for the Vega IGP having 10 CUs in the 2700U vs. 11 in the 2400GE. I definitely expect to see the 2400GE in AIOs or SFF desktops though.

e: I'm actually a little surprised that AMD isn't teasing versions of 6-8 core Ryzen for mobile workstations. My guess would be that they either don't think there's sufficient demand or that they are having difficulty producing a sufficiently compact version of the package, but I don't really have any idea.

Eletriarnation fucked around with this message at 16:49 on Apr 23, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Munkeymon posted:

My work laptop has a Xeon badge because what do words even mean anymore I guess?!

Xeon hasn't meant much for over a decade though, what's new?

Like, you don't even need it for ECC support if you go old enough; my 875P motherboard is perfectly happy to run 4GB ECC DDR1 with a Pentium M in it.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I don't think there's an easy formula for something like that, but if you compare the 1950X to 1800X and then look at how much improvement there was from 1800X to 2700X I wouldn't be surprised to see a 2950X that can turbo to 4.5.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Happy_Misanthrope posted:

So an 8 year cycle, if you don't count the more powerful version of a machine designed specifically so the platform doesn't seem so dated

Yeah, my thought is that the PS4 Pro is more like a successor that has backward compatibility than just a slim-down like some previous console rev. 2s have been. There's an implied commitment with releasing that expensive new hardware pitched at current owners that it will continue to get releases for several years before being obsoleted.

As far as I'm aware the console itself doesn't typically have great margins compared to the games anyway.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

FaustianQ posted:

Yeah, it's absolute garbage and it's just mindblowing they think it's acceptable. It's also in a 35W configuration, so it's basically running below UHD 630 performance. It's super dumb and I can't figure out how they think it'll sell at that price. To be special it'd need to somehow fully use the IMC to feed the 512 GCN3 ALUs, it'd have to be capable of running both DDR4 and GDDR5 simultaneously, and it'd need an adequate cooling solution to run at full speed. Instead it's just an obnoxiously expensive, underpowered toy.

Now, this on the other hand might be worthwhile https://www.kickstarter.com/projects/udoo/udoo-bolt-raising-the-maker-world-to-the-next-leve



This is interesting, but it's a little odd because they're positioning it as a maker board when it's very nearly the size of mini-STX and could just be called an SFF motherboard. With that much power I'd really like to have more I/O like PCIe or TB3, and from what I recall the V1605B model which they're using for their more expensive variant has built-in 10GbE, so I'm a bit surprised they aren't taking advantage of that.

Honestly, a Ryzen Embedded board would be an incredible basis for a low-cost NAS what with having on-package integrated graphics, fast networking, ECC support and lots of available I/O to attach storage controllers.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
If I recall correctly, Epyc's SP3 socket is physically the same as sTR4, so they probably could straightforwardly hack together a 32-core "Threadripper" which would just be a full four-die Epyc instead of the 2-die+2-shim thing they're doing now. My main concern with that would be increased NUMA issues - we already have two channels connected to one die and two connected to the other, so are we going to come up with a new memory controller design that connects one channel to each die, or are we going to just tack on two new dies and force them to go through the first two for all memory accesses? I probably don't need to elaborate that each of those two approaches has some nasty performance implications.

The alternative would be to use a >8 core die but that doesn't exist yet and I don't know that it's actually a good idea to go past 8 cores on the mainstream models yet. You could also go from four memory channels to eight but then you're just straight up selling Epyc as a consumer product and you'll need all new motherboards, I assume.

e: 1 die != 1 CCX

Eletriarnation fucked around with this message at 18:18 on Jun 5, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, so they're going with this:

Eletriarnation posted:

are we going to just tack on two new dies and force them to go through the first two for all memory accesses?

Since the penalty for indirect access in 1st gen Threadripper from what I remember is roughly double memory latency, I guess I can buy their argument that it's not necessarily that crippling. Benchmarks will tell, at least. I'm more curious to see if there are new 2-die models and if their clocks increase significantly.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I assume that would only be a cost savings if there's still a substantial yield of dies which have more than one unviable core on a CCX and can't therefore be made into Ryzen 5s. Even those could go into the 1900X since it's 2 dies, each with 2 CCXes, each with 2 cores, but that's probably not a very popular part.

Otherwise they are cutting down more perfectly good dies and having to use a bigger interposer to connect them, I think.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
In an OS written to be NUMA-aware, applications can call an API to request that a batch of threads be located in the same node, do a memory allocation on a specific node, figure out which nodes have how much memory attached, etc. At least, that's how it seems to work in Windows according to this page I just found from Microsoft: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363804%28v=vs.85%29.aspx

As an end user, you can also set core affinity for processes yourself if you really want to fine-tune it that much. I'm not sure if there's a way to manually manage memory allocation like that though.
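For example, here's a minimal sketch of the sort of calls that page describes - finding the node count, allocating memory with a node preference, and pinning the calling thread to that node. Node 0 is just an arbitrary choice and most error handling is skipped:

```c
#include <windows.h>
#include <stdio.h>

/* Minimal NUMA sketch: ask how many nodes exist, commit memory with a
   preference for node 0, then restrict the current thread to node 0's
   processors so the work stays next to the memory it just allocated. */
int main(void) {
    ULONG highest_node = 0;
    GetNumaHighestNodeNumber(&highest_node);
    printf("System reports %lu NUMA node(s)\n", highest_node + 1);

    /* 64 MB committed with a preference for node 0's local memory. */
    void *buf = VirtualAllocExNuma(GetCurrentProcess(), NULL,
                                   64 * 1024 * 1024,
                                   MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE, 0 /* preferred node */);
    if (!buf) return 1;

    /* Pin this thread to the processors that belong to node 0. */
    GROUP_AFFINITY node0_affinity;
    if (GetNumaNodeProcessorMaskEx(0, &node0_affinity))
        SetThreadGroupAffinity(GetCurrentThread(), &node0_affinity, NULL);

    /* ... do the memory-intensive work here ... */

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```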

Eletriarnation fucked around with this message at 21:17 on Jun 7, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Khorne posted:

They're all the top ~20% of 8700k chips that intel took off the market silently over the past however many months, screwing anyone who bought it for a high clocked single core processor.

I understand the principle behind the resentment, but there are only 50,000 units of the 8086K - is it actually a meaningful proportion of the total, let alone 20%?

And yes, the thermal interface is the same as the rest of the bunch - they aren't soldered or liquid-metaled.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Khorne posted:

I didn't realize they aren't even going 5.0GHz on all cores. Sorry about that. Isn't a 4.3 turbo on all cores really lackluster?

Every single turbo step on the 8086K is exactly the same as the 8700K except for the single-core one, which is 5.0 instead of 4.7. It's a marketing gimmick or, to be charitable, a limited-edition anniversary label - not an actual upgrade.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah - from Anandtech's page, the 1920X is (3+3)+(3+3) and each die has the full cache available. Curiously it looks like the 1900X is (2+2)+(2+2) but has half the cache disabled.

The 24-core may just double that and have 4 dies with 6 cores each, but as far as I know there's nothing stopping them from using 3 full dies and only having one without a direct memory controller connection. Seems like that design would have superior performance for most purposes and the only reason to go for 4 cut-down dies instead would be if AMD has so many dies coming out with 6/7 working cores that they need a way to use them up other than 2600[X]s and 12-core TR2s.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I'm pretty sure that in practice, between improved turbo and IPC, the 2000-series chips are around 110-115% of the per-core performance of the 1000-series, so it's just a question of whether you're using the extra 2 cores vs. how much difference that 10-15% would make.
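As a rough illustration of that trade-off - purely idealized numbers, with 6 vs. 8 cores picked as a hypothetical example and 12.5% used as the midpoint of that range:

```c
#include <stdio.h>

/* Idealized throughput comparison: a newer 6-core with ~12.5% better
   per-core performance vs. an older 8-core, as thread count grows.
   Throughput is modeled as (threads actually running) * per-core speed. */
int main(void) {
    double uplift = 1.125;              /* newer chip's per-core advantage */
    int new_cores = 6, old_cores = 8;   /* hypothetical example parts      */

    for (int threads = 1; threads <= old_cores; threads++) {
        double newer = (threads <= new_cores ? threads : new_cores) * uplift;
        double older = threads * 1.0;
        printf("%d threads: newer %.2f vs older %.2f -> %s ahead\n",
               threads, newer, older, newer >= older ? "newer" : "older");
    }
    return 0;
}
```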

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
You can be confident enough to say you think something will happen without wanting to gamble on it with a toxx, especially considering there's no upside to toxxing if you're right. It's not like he's saying that it's absolutely certain or asking you to gamble on it.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Looking forward to quite likely replacing my X5660 with a 3700 when Zen 2 launches, if they can get within a few percent of 9900K performance for less heat/cost.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
It might be an issue to use them at 4.0 speeds, but presumably they'll still work fine at 3.0, which will be fast enough for many purposes for a long time.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

orcane posted:

Technically it would be. But apparently the B450 boards aren't built (although I'm sure that's being way overblown again) as well as the good X470 ones. Since they also don't save you much or any money, I'd go with the X470.

I watched the Buildzoid video about this, and most of what he was pointing out was that all of the boards are 4-phase at most for vCore regulation (and some of them poorly cooled, too) and some have only one phase for SoC regulation. His main concern was that a 2700 or 2700X trying to do XFR would potentially run into issues with the 4-phase vCore. The SoC VRMs are a lesser issue, but he said it would still probably not be a good idea to overclock a 2400G on those boards since the larger Vega in that model can pull a surprising amount of power under increased voltage.

A six-core or a 2200G would be just fine as long as you have reasonable expectations for the kind of OCs that a budget motherboard can get you, and I imagine a 2400G at stock (or overclocking just the CPU) would be OK too.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
It seems unlikely that we will see a 16C that can hit a typical turbo of 4.9GHz, a thing that currently does not exist for any price, for $500. If AMD is clocking chips that high I'd expect to see the 8C at least at $350, probably higher, and the 16C double that.

It seems really unlikely that we will see a 64-core Threadripper running at 5GHz base, a thing that is so far off the map currently as to be laughable, for any price let alone the approximate price of the current 4.2GHz 32-core.

e: Even if the die shrink lets them get from 4.2 to 5.0 without increasing power consumption, you're talking about doubling core count in a chip that already has a 250W TDP, so good luck running that without a 360mm AIO or a custom loop.

e2: Also, yeah, why would they entirely axe 6-cores? In general this chart seems out there on core counts - most people don't really have a good use for 8 yet, and they're supposedly going to make 16 the new standard?
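Back-of-envelope on the power point in the first edit, treating dynamic power as scaling linearly with core count and with clock at a fixed voltage (and 5GHz would almost certainly want more voltage, so these are best cases):

```c
#include <stdio.h>

/* Starting from the current 32-core 4.2GHz / 250W TDP part, estimate what
   a 64-core 5.0GHz part would draw. Case 1 grants the post's assumption
   that the die shrink makes the clock bump "free"; case 2 doesn't. */
int main(void) {
    double tdp_32c_42 = 250.0;                              /* 250W baseline */

    double best_case  = tdp_32c_42 * (64.0 / 32.0);                  /* ~500W */
    double with_clock = tdp_32c_42 * (64.0 / 32.0) * (5.0 / 4.2);    /* ~595W */

    printf("64C @ 5.0GHz, shrink covers the clock bump: ~%.0f W\n", best_case);
    printf("64C @ 5.0GHz, linear clock scaling on top:  ~%.0f W\n", with_clock);
    return 0;
}
```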

Eletriarnation fucked around with this message at 15:45 on Dec 3, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
AMD needs market share but they also need to stay in the black since they don't have a giant war chest built up from previous successes. In general I don't find these clock speed/core count combinations to be believable period - like I said before, 64 cores at 5.0GHz would be expected to use an absolutely insane amount of power - but even if they're possible, selling chips that crush Intel's for half of what Intel would charge sounds more like a self-inflicted wound than just aggressive pricing.

Most of the chart should probably have the supposed MSRP doubled to be realistic, and at the top end we're just looking at sheer impossibilities that would still fly off the shelves at 5 times the prices shown. I mean really, go look at how a $10000 Xeon Platinum 8180 compares to the specs of that supposed 3990WX.

I'd love to be wrong since I'm sitting on 250 shares of AMD stock I bought over a year ago, but this poo poo seems like something between a hoax and a fever dream.

Eletriarnation fucked around with this message at 17:21 on Dec 5, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Right, but you're talking about a thin edge if any there (28->32 cores, 3.8GHz boost -> 4.2 with lower IPC, higher TDP on the TR) with the Xeon/2990WX comparison, versus "is substantially more than twice as powerful" with this theoretical 3990. AMD still has to work pretty hard to sell product when the story is "we're cheaper and just as good as the other guys! (who you've already been buying from for ten+ years)", but I think "the other guys can't hold a candle to us, period" would be far more compelling and would allow them to charge as much as or more than Intel does.

Regarding Epyc, it's a closer comparison to Skylake-E than TR is in terms of ancillary stuff like memory channels and multi-socket support and I feel the current pricing still supports the point above - they're putting a product out there that is only competitive with Intel instead of solidly passing it, and so as the underdog they have to be very aggressive with pricing to be confident that they can move enough of that product.

Khorne posted:

I'm not convinced the leak is real, but they would be in the black vs hardware cost with those prices. They can afford to not fully recoup R&D from consumer chips when their entire lineup uses the same lego architecture.

Part of why I don't think this is realistic is that it's going to be harder to ask for a premium for a 64-core Epyc if they're selling a 64-core insanely clocked TR with ECC support for bargain basement prices. Sure, some enterprise buyers will need the PCIe lanes/multisocket support/memory channels but if you're primarily concerned about how big of a (cores*MHz) result you can get for a given budget then...

Eletriarnation fucked around with this message at 18:59 on Dec 5, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

sincx posted:

Epyc should still be more power efficient from a performance-per-watt perspective. That's more than enough for data centers to stick with the enterprise chips, given how quickly power bills add up to exceed the purchase price of the chips.

Absolutely true if you just let everything run at stock. Seems foolish though to do that with a 5GHz 64-core if performance-per-watt is a bigger concern than raw performance.

Khorne posted:

I'm not sure this is a concern for AMD. AMD wouldn't let someone like Dell sell TR/AM4 cpus in the way you're describing, and if the Backblaze of datacenters wanted to try and setup threadripper racks no one is going to stop them. TR isn't as space efficient as epyc, and the epyc feature set actually matters for most data center uses. AMD also cuts deals with the big boys and certain institutions. No one pays full retail for things.

Look at the 1080Ti or a few previous generation consumer nvidia cards vs their tesla offerings. You'd save 10x-20x for the same or even slightly better performance and feature set for many compute tasks*. In the data center, and at the enterprise level, you still end up with the Tesla cards because it's what vendors are selling and supporting.

Also, all of the x370/x470 and probably b350/b450 motherboards and zen1/zen2 CPUs support ECC. Just not officially. But it works.

*I'm aware the differences, ECC on the Tesla cards being theoretically a big one, also aware that nvidia has been trying really hard to cripple their consumer line for compute and prevent people from using these in data centers. Some universities have clusters of 1080Ti and earlier consumer GPUs.

I can't really argue with the "well, no one ACTUALLY pays $10000 for that chip" because while I do know that's true, I don't know what they do actually pay to know how much it matters. If the price/performance differential is big enough though between proper server chips and HEDT (or whatever the hell you would call 5GHz 64-cores limited to 1S) the cloud providers at least are going to consider just building whiteboxes and telling traditional vendors to take a hike like they did with Ethernet switching.

Re: 1080Ti vs. Tesla, it's my novice and maybe incorrect understanding that the 1080Ti is perfectly usable as a training/practice card to learn and hone deep learning techniques (so yeah, university labs!) but that the various-precision FP limitations do actually cripple it for a lot of serious work, especially if you're a cloud provider trying to sell VM instances to people and don't know beforehand that the GeForce card will be acceptable for what they wish to do. This is also ignoring the hoops that you have to jump through to get a GeForce card working with a VM; I'll assume that they're not a significant hurdle here. For those reasons I'm uncertain how good a comparison this is to TR/Epyc.

I don't think I've said this yet but I also just wonder - why would anyone buy a 64-core TR if not to run a server on it? 32 cores already is a huge quantity for a workstation, and if someone said they "needed" 32 cores for an application I'd already be wondering if that load could be moved to some kind of compute cluster instead of an end user's machine. What would be the actual demand for this chip other than 1S servers and people who don't actually need that many cores but have more money than sense?

Eletriarnation fucked around with this message at 20:44 on Dec 5, 2018

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Seamonster posted:

I'll settle for 8c/16t @ 5.0 for $250. That way, even if architectural IPC improvements come up short, the clock speed will still be there to carry.

Uhh yeah I too would "settle" for a competitor to Intel's top regular desktop chip at half the price.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
That is true, but I am not sure why it is relevant. While I expect AMD to try harder to compete with Intel than Intel would to compete with their own last generation, I still wouldn't expect a comparable product at half the price. It's not like Intel drops their prices anywhere near 50% even a year after launch.
