Yaoi Gagarin
Feb 20, 2014

Are Jaguar cores based on Bulldozer? Everyone thinks that consoles have "8" cores, but if they're actually terrible CMT cores then a quad-core Ryzen would be an upgrade. But I'm sure most people would be confused about why the core count "downgraded".

Yaoi Gagarin
Feb 20, 2014

Perhaps in his place a new engineer will ryze

Yaoi Gagarin
Feb 20, 2014

SwissArmyDruid posted:

...how is that loving _entry level_? Isn't entry level supposed to be some 10-core part with SMT, clocked up to 3.4 with boost to 3.8?

I think it means the cheapest 16-core part, not the cheapest Threadripper overall.

Yaoi Gagarin
Feb 20, 2014

Risky Bisquick posted:

Conspiracy theory: Mac Pros are being held back to plop in Threadrippers. They could increase their margins and increase the price to the consumer :monocle:

I agree

Yaoi Gagarin
Feb 20, 2014

Paul MaudDib posted:

Subtracting performance from Intel's benchmark results is bullshit which completely invalidates all of those results. And just to be clear, they are cutting the performance of the Intel chips almost in half. That's an immense level of bullshit right there.

Yeah, ICC is good at extracting performance from Intel chips. But that's real performance that you would have in real-world usage. It's AMD's responsibility to push patches into LLVM or GCC or write their own compiler if they think they're leaving performance on the table.

Next up: AMD doctors the Vega benchmarks because NVCC emits better PTX than OpenCL :qq:

And if they really want an apples-to-apples comparison they could run all the benchmarks with gcc and use those numbers. Arbitrarily cutting numbers is so hilariously bad that I can't believe even the worst marketing idiot would expect it to fly.

Yaoi Gagarin
Feb 20, 2014

I much prefer Ryzen, Threadripper, and EPYC over "7th Generation Intel yawnzzzzz..."

Yaoi Gagarin
Feb 20, 2014

JnnyThndrs posted:

:agreed:

Ryzen and especially Epyc are pretty bad, but Threadripper is so cheesy it's awesome. I'm old and remember 'Pentium' being a new thing, and it sounded pretty terrible until people got used to it.

I was looking at some computer parts with my dad about a year ago and when he saw the Pentium name he was really surprised that Intel would still be selling such an old chip

Yaoi Gagarin
Feb 20, 2014

Cygni posted:

i mean all the good motherboards let you turn the LEDs off and they cost pennies, thats why they are on everything. i dunno if its too worth getting worked up over, especially when unironically buying a cpu named _-~+=ThReAdRiPpEr=+~-_

old people get angry at diodes

Yaoi Gagarin
Feb 20, 2014

a funny thought occurred to me: assuming AMD has a policy of only using their own CPUs in their workstations, their employees were probably extremely happy when ryzen hit production

Yaoi Gagarin
Feb 20, 2014

you can read this for some information from back then: http://techreport.com/review/21813/amd-fx-8150-bulldozer-processor

here's a paragraph from the conclusion:

Scott Wasson posted:

Faced with such results, AMD likes to talk about how Bulldozer is a "forward-looking" architecture aimed at "tomorrow's workloads" that will benefit greatly from future compiler and OS optimizations. That is, I am sure, to some degree true. Yet when I hear those words, I can't help but get flashbacks to the days of the Pentium 4, when Intel said almost the exact same things—right up until it gave up on the architecture and went back to the P6. I'm not suggesting AMD will need to make such a massive course correction, but I am not easily sold on CPUs that don't run today's existing code well, especially the nicely multithreaded and optimized code in a test suite like ours. The reality on most user desktops is likely to be much harsher.

scott was right

Yaoi Gagarin
Feb 20, 2014

Risky Bisquick posted:

The current AM4 CPU and B350 board will work, this does not mean new cpus will necessarily work without a different board/chipset. See Intel.

AMD is historically pretty good about this though, and I believe they've said AM4 will be supported through 2020. Bigger question is whether there will be any CPUs that actually warrant an upgrade in that time.

Yaoi Gagarin
Feb 20, 2014

FaustianQ posted:

Zen2 is most assuredly going to be AM4, and will likely be very worth it, ignoring any potential gains AMD might get from Zen steppings in 2018.

Oh, I have no doubt that it will be good, but will it be good enough that if you spent $200 or $300 on a Ryzen today, you would then get your money's worth out of another $200 or $300 Zen 2 chip?

Yaoi Gagarin
Feb 20, 2014

EoRaptor posted:

Compilers have gotten better (In fact, a lot better, LLVM was/is a major advance in compiler design), however it turns out what itanium needed from a compiler was perfect knowledge of all possible operations an application could perform and all the CPU states that would result, which even with very simple code seems to be an NP hard problem. Also, compilers are still written by humans and aren't capable of the level of 'perfection' needed to even approach what itanium demanded.

In the end, itanium wasn't even a good CPU design. It didn't scale well in speed or performance, and was a dead end for most CPU applications, which are dominated by end users.

I agree with what you're saying about itanium, but I'm curious why you say LLVM is a major advance in compiler design. To my knowledge its backend isn't as good as GCC's or (some) commercial compilers'.

Yaoi Gagarin
Feb 20, 2014

Was DEC Alpha supposed to be super badass or something back in the day? I find mention of it in a lot of places but no explanation of why it was so interesting

e: besides it having bizarre super weak memory ordering

Yaoi Gagarin
Feb 20, 2014

Cygni posted:

hasnt X299 had NVME raid since launch, and people were dogging AMD for not having it on their premium platform?

not like normal humans would do NVME raid, but HEDT really isnt the platform for normal humans

Intel wants you to pay 300 dollars for a RAID key to use it. AMD is giving it away for free.

Yaoi Gagarin
Feb 20, 2014

SwissArmyDruid posted:

Was only kidding, because that's almost word-for-word a tweet from him.

Still though, what is it with KVM that keeps it around, specifically? Xen and QEMU and the one other one whose name I can't remember are all working fine now, aren't they?

Aren't QEMU and KVM related? I thought QEMU by itself only does emulation, and you need KVM to use hardware-assisted virtualization.
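
That's how I understand it, roughly. A little sketch of the split (assuming a Linux host with the stock /dev/kvm device and standard upstream QEMU flags; the image name is just a placeholder):

# Sketch of the QEMU/KVM split: use hardware-assisted KVM when /dev/kvm is
# available, otherwise fall back to QEMU's pure-emulation TCG backend.
import os

def qemu_command(disk_image, mem_mb=2048):
    cmd = ["qemu-system-x86_64", "-m", str(mem_mb),
           "-drive", "file={},format=qcow2".format(disk_image)]
    if os.access("/dev/kvm", os.R_OK | os.W_OK):
        # KVM present: the kernel module runs guest code directly on the host CPU
        cmd += ["-enable-kvm", "-cpu", "host"]
    else:
        # No KVM: TCG emulates every guest instruction, which is much slower
        cmd += ["-accel", "tcg"]
    return cmd

print(" ".join(qemu_command("guest.qcow2")))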

Yaoi Gagarin
Feb 20, 2014

Ryzen's memory controller ages like fine wine

Yaoi Gagarin
Feb 20, 2014

Paul MaudDib posted:

Yes, streamers are typically better off using GPU encoding. CPU encoding has better quality but it's difficult to achieve this in real-time, and if you are going to attempt to do so then the gold standard is to do the encoding on a dedicated machine since anything that exceeds the quality of GPU encoding is also going to poo poo up your framerate something fierce, even on an 8-core processor.

Which, again, is why AMD's marketing is deceptive here. Regular users are better off using GPU encoding due to substantially lower impacts on framerate, ~*professional streamers*~ are better off getting a dedicated machine to handle encoding.

Seems hard, you'd need a dedicated machine that can really rip through several threads at once

Yaoi Gagarin
Feb 20, 2014

bobfather posted:

I apply TIM to a CPU not in the socket.

But I also don't just put a rice-sized blob of TIM in the middle of the heatspreader and hope the heatsink will spread the TIM for me.

My steps to applying TIM:

1. Remove CPU from socket
2. Clean old TIM with a lint-free cloth with a bit of rubbing alcohol
3. Apply rice-sized (or for Ryzen more like pea size) blob of TIM onto middle of heatspreader
4. Use a plastic spreader (I like to use old credit cards) to spread an even layer across the entire heatspreader
5. Install in socket, install heatsink

Edit: the only reason I do it this way is that I don't trust the heatsink to spread the TIM for me. If anyone can convince me this is a fallacy and my thinking is wrong, I'd have no trouble switching to just putting a blob of TIM on a socketed CPU.

Why don't you try the heatsink-spreading method on a chip, let it do a few hot/cold cycles, and then take the heatsink off and look at the paste?

Yaoi Gagarin
Feb 20, 2014

Generic Monk posted:

ps4 pro gpu is roughly equivalent in perf to a gtx970 which is still a solid 1080p60 card

I'm gonna need you to source a benchmark on this just because I have a 970 and don't want to believe you

Yaoi Gagarin
Feb 20, 2014

repiv posted:

The closest thing to the PS4 Pro GPU on PC is the RX470 - they're the same architecture, the RX470 is a little faster but the PS4 Pro has bolted-on 2xFP16 support so they're probably pretty close in practice.



Dang, that's kind of alarming. I'm unused to console hardware being this close to PC performance.

Yaoi Gagarin
Feb 20, 2014

FaustianQ posted:

Ryzen was originally delayed because it's was so fast it managed to get an overflow error on the core frequency. Ryzen 2 will be noticeably slower in absolute terms but faster practically.

Jim Keller was a big Dragon Age Origins fan

Yaoi Gagarin
Feb 20, 2014

Paul MaudDib posted:

If it didn't need ridiculous RAM timings it wouldn't be a problem. 3000 or 3200 isn't particularly expensive unless you need the B-die.

I suspect cleaning up the IMC will be easier than un-gearing IF from the RAM clocks, and when all is said and done it gets you to the same place. I have to assume that it was engineered that way for a reason and it's not going to be as easy as internet commentators seem to think it will be.

Come on, it can't be that hard. Just throw some FIFOs here and there and sacrifice a few more goats to Jim Keller and I'm sure it'll all work out

Yaoi Gagarin
Feb 20, 2014

Paul MaudDib posted:

It's not really about price, it's about wasting slots and x16 lanes and screwing up your airflow for something that can be integrated onboard via the PCH. Tons of slots+lanes is one of the big selling points of X399 and you're screwing that up right off the bat. Minor differences in cost don't matter all that much on a $1300 build - even if you could buy a 10 GbE NIC for $50 today, I'd rather drop the extra $50 on the motherboard and have it integrated. The fact that the Zenith Extreme is more expensive than the Fatal1ty is the final nail in the coffin IMO.

And yeah if you don't care about 10 GbE and will never care about 10 GbE and can get a board for <$300 (again, like the MSI X399 SLI Plus) then by all means go for it, but most boards cluster around $350-400 anyway, and at that point the Fatal1ty is a no-brainer for $20-50 more IMO, it's easy future-proofing.

But then I'd have a motherboard named "Fatal1ty"

Yaoi Gagarin
Feb 20, 2014

PerrineClostermann posted:

....Modern overclocking is a risk? I thought it was just clever marketing for higher clocked chips. "Oh, you're totally pushing it to the edge! You're the one bringing the chip to its maximum potential! Isn't that amazing and worth 100 dollars?"

The real trick is that it allows chip manufacturers to benefit from performance numbers that they don't actually have to support. Pretty much all Coffee Lake CPUs should make it to an all-core 5 GHz... but if they don't, it's not Intel's problem.

Yaoi Gagarin
Feb 20, 2014

Potato Salad posted:

Wouldn't have happened if it was ARMed

Yaoi Gagarin
Feb 20, 2014

SlayVus posted:

x64 is AMD technology. To run x64 you need AMD services.

I don't believe you re: the second sentence. I have seen embedded operating systems running on x64 that most definitely do not have any "AMD services" running

Yaoi Gagarin
Feb 20, 2014

Threadviscerator

Yaoi Gagarin
Feb 20, 2014

SwissArmyDruid posted:

"We can't shrink the node anymore" is bad. But that's not the problem that we're facing.

The problem that the industry is facing is "We can't shrink the node anymore on SILICON."

Honestly, it's amazing that we've even managed to get to this point on the one single element since the beginning of the semiconductor era.

I mean, using something that's not silicon requires reinventing everything we know about manufacturing, doesn't it? That's a tall order.

Yaoi Gagarin
Feb 20, 2014

ufarn posted:

I'm planning on getting Zen+ - still can't figure out which of 27/2800(+) are the way to go - because the whole Spectre/Meltdown thing soured me on Intel.


Curious, what about that soured you on Intel

Yaoi Gagarin
Feb 20, 2014

I wonder if Threadripper 1 CPUs will get cheaper when TR2 is out. Might be a good opportunity to build a modern-ish many-core server on the cheap.

Yaoi Gagarin
Feb 20, 2014

If AMD isn't able to get clocks much higher in the next couple of years because of process problems, I wonder if they should go all-in on the slower-but-wider idea: add more execution units to each core and put in 4-way SMT instead of 2-way. I know IBM made some POWER chips that do that, and I think there's an ARM server chip that also has 4-way SMT. It could potentially be an easy way to get a lot better at some server workloads.

Yaoi Gagarin
Feb 20, 2014

PC LOAD LETTER posted:

A GF fab guy said that they're expecting 5 GHz-range chips with their 7nm process. He didn't specify exactly which chip he was talking about, but it's generally accepted he meant Zen2, FWIW.

Oh gently caress I stand corrected. GloFo making a good process is :psyduck: but I'm hyped

Yaoi Gagarin
Feb 20, 2014

SwissArmyDruid posted:

It's Semiaccurate, and therefore sits firmly in the same category of "salt now, so you're not salty later" as WCCFT, but: according to this, that thing that I was worried about, where people just upgrade to the next thing that Intel comes out with out of inertia may not be happening.

https://semiaccurate.com/2018/05/22/intel-customers-arent-buying-new-offerings/

A point highly belabored by current Epyc marketing, it seems:



https://www.servethehome.com/amd-this-is-epyc-campaign-and-amd-epyc-updates/

Oh my God :allears:

Yaoi Gagarin
Feb 20, 2014

ET was my jam in middle school / high school. But I played it on whatever lovely CPU/iGPU was in the Sony vaio I used at the time so I don't think I got much more than 30fps, if that

Yaoi Gagarin
Feb 20, 2014

ufarn posted:

Can someone quantify "thousands of 300mm wafers" for me?

Probably somewhere between 50 and 100 times that in CPUs? I think a wafer yields roughly that many chips, though it depends a lot on die size.
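
For a rough sense of scale, here's the usual back-of-envelope (the die areas below are just assumptions I picked, and defect yield knocks the real count down further):

# Gross dies per 300mm wafer, using the common approximation
#   DPW = pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for area in (100, 200, 500):  # example die areas in mm^2
    print(area, "mm^2 ->", gross_dies_per_wafer(area), "gross dies")
# Prints roughly 640, 306, and 111 gross dies respectively, before yield losses.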

Yaoi Gagarin
Feb 20, 2014

It should help with battery life if nothing else. Batteries aren't getting much better and the bigger, brighter, higher resolution screens suck more and more energy. I'm sure somebody somewhere wants to put HDR on a phone too

Yaoi Gagarin
Feb 20, 2014

No that's exactly my point. The display sucks more and more power but the battery can only be so big, so it's the CPU and other electronics that need to get more efficient.

E: in other words, making your silicon more power efficient isn't something you do to make battery life go up, it's something you do to make it stay the same or go down less
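
Toy numbers to illustrate (battery size and power draws here are pure assumptions):

# Fixed-size battery, hungrier display, and what a more efficient SoC buys back.
BATTERY_WH = 12.0  # assumption: roughly a 3000 mAh phone battery

def screen_on_hours(display_w, soc_and_rest_w):
    return BATTERY_WH / (display_w + soc_and_rest_w)

print(screen_on_hours(1.0, 1.0))  # old display, old silicon        -> 6.0 hours
print(screen_on_hours(1.5, 1.0))  # brighter display, old silicon   -> 4.8 hours
print(screen_on_hours(1.5, 0.6))  # brighter display, better silicon -> ~5.7 hours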

Yaoi Gagarin
Feb 20, 2014

Lambert posted:

It's pretty clear the whole "Saddam buying PS2s to launch missiles" story was complete fabrication by World Net Daily. The PS2 wasn't even good at that type of calculation. Also, by 2000 encryption export restrictions were all but gone, but there was an embargo on Iraq.

Are encryption export restrictions actually gone? At my last job we got an annual email that went something like "if you sell/give any products to anyone check with legal first because ITAR yo"

Yaoi Gagarin
Feb 20, 2014

fishmech posted:

Yeah that's what they'd been upset about. Sony gave very little warning to the public that they were going to stop allowing PS3s to use the Linux feature. Sony even ended up in several class action lawsuits over it for the following years, one of which resulted in payouts to most people who had bought PS3s before the cancellation of "other os" functionality and could prove they had done so.

Essentially Sony had just dropped an announcement at the end of March 2010 that from the early April 2010 update onward, Linux installation would be blocked, and that consoles would start shipping with the Linux-blocking firmware pre-installed very shortly afterwards. Among other things, this happened while the Air Force was still building up their cluster of PS3s, and in the early planning stages of many other projects.

Did they say why?
