Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
The real question is how much Sony is going to pay Verizon and Comcast for priority on their networks; then we have that whole monthly bandwidth allotment.

edit: sorry this is off topic for this thread.


Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Meh, it's not like AMD has much else going on right now, and this is definitely relevant with respect to AMD's HSA, what with the (dedicated?) GPGPU block.

I guess if we have to add some strictly thread-relevant content to appease the topic lords, AnandTech gave the Piledriver-based Opteron 6300 a thorough benchmarking. If you're hardware-cost-bound, they make for attractive servers, but most enterprise/HPC outfits are bound by power or software or consulting costs.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Factory Factory posted:

Meh, it's not like AMD has much else going on right now, and this is definitely relevant with respect to AMD's HSA, what with the (dedicated?) GPGPU block.
Er, dedicated? As far as I've seen, PS4 seems to have a GPU with (at least relatively) standard GCN units. The only interesting thing that HSA would provide is if the GPU and the CPU have some level of cache coherence.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
One of the rumors going around, not sure if it was for PS4 or XboxNumbers, was that there would be a CU set aside for GPGPU programming to take over some of the highly parallel functions in the current CPUs of the PS3 and Xbox360.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Factory Factory posted:

One of the rumors going around, not sure if it was for PS4 or XboxNumbers, was that there would be a CU set aside for GPGPU programming to take over some of the highly parallel functions in the current CPUs of the PS3 and Xbox360.
Unlikely to be static, but I wouldn't be surprised if they expose something similar to OpenCL device fission to allow devs to partition the GPU semi-dynamically.

edit: the reason it won't be static is that, since there's a single platform, you could pretty easily either timeslice the entire GPU if you have bulk processing to do, or drain a limited number of CUs (known a priori thanks to the aforementioned single platform) at specific times if you're doing more latency-sensitive work.
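For reference, this is roughly what device fission looks like in stock OpenCL 1.2 via clCreateSubDevices: splitting a device's compute units into sub-devices that each get their own command queue. Purely illustrative; whether a console SDK exposes anything like this is guesswork, and most shipping GPUs don't support BY_COUNTS partitioning today.

code:

// Hypothetical sketch: carve an 8-CU OpenCL device into a 6-CU and a 2-CU
// sub-device, so latency-sensitive work can be queued to the small one.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    const cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_BY_COUNTS,
        6, 2,                                     // compute-unit counts for the two sub-devices
        CL_DEVICE_PARTITION_BY_COUNTS_LIST_END,
        0
    };

    cl_device_id sub[2];
    cl_uint returned = 0;
    if (clCreateSubDevices(device, props, 2, sub, &returned) == CL_SUCCESS)
        std::printf("got %u sub-devices\n", returned);
    else
        std::printf("this device doesn't support BY_COUNTS partitioning\n");
    return 0;
}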

Professor Science fucked around with this message at 06:15 on Feb 21, 2013

eames
May 9, 2009

Factory Factory posted:

One of the rumors going around, not sure if it was for PS4 or XboxNumbers, was that there would be a CU set aside for GPGPU programming to take over some of the highly parallel functions in the current CPUs of the PS3 and Xbox360.

I would expect that extra CU to be responsible for background tasks such as the permanent, time-shift-like h.264 encoding and various streaming / decompression tasks. Possibly also some of the image and signal processing for the new Move/Kinect device?

quote:

Lead system architect Mark Cerny also confirmed the upcoming system will have a local storage hard drive, along with an extra chip built into the system with the sole purpose of handling all the PlayStation 4 downloading. The interesting thing about this second chip is that you will not only be able to download games in the background as you continue to play, but, in a feature new to consoles, you will also be able to play what you're downloading at the exact same time.

//edit: nevermind, you’re talking about a CU in the APU and not an extra chip. Disregard this post. :saddowns:

eames fucked around with this message at 10:28 on Feb 21, 2013

roadhead
Dec 25, 2001

Professor Science posted:

Er, dedicated? As far as I've seen, PS4 seems to have a GPU with (at least relatively) standard GCN units. The only interesting thing that HSA would provide is if the GPU and the CPU have some level of cache coherence.

Main memory is shared and I bet all the L2 cache (and L3 if there is any) is shared as well.

Endymion FRS MK1
Oct 29, 2011

I don't know what this thing is, and I don't care. I'm just tired of seeing your stupid newbie av from 2011.
Speaking of AMD and gaming, this CPU performance test of Crysis 3 was interesting. Shows an FX 8350 beating a 2600K, and an FX 6300 beating a 2500K. I don't remember if that thread shows it, but the site that did the benchmarks showed the core loads on each CPU, and all threads were filled on every processor. Perhaps developers will start learning how to properly make multi-core games?

Yaos
Feb 22, 2003

She is a cat of significant gravy.
Am I seeing this right, AMD's $200 processor is on the heels of Intel's $550+ processor in Crysis 3? What kind of shenanigans are going on here?

Edit: Ah, but multiplayer is a different story, way behind Intel there. Must mean Crysis 3 really likes its GPU.

Yaos fucked around with this message at 03:41 on Feb 22, 2013

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

roadhead posted:

Main memory is shared and I bet all the L2 cache (and L3 if there is any) is shared as well.
Shared main memory != cache coherence. This kind of coherence (between CPU and integrated GPU) is one of the big promises of HSA. If I had to make a bet, I'd say the GPU can snoop the CPU L1/L2 but not vice-versa.
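To put that in concrete terms, here's what the lack of coherence costs you with plain OpenCL 1.x today (my own illustration, nothing console-specific): even when a buffer lives in the same physical DRAM the CPU uses, the host still has to map and unmap it so the runtime can flush and invalidate caches. HSA-style coherence is what would make that dance, and the copies hiding behind it, go away.

code:

// Minimal sketch under stock OpenCL 1.x semantics.
#include <CL/cl.h>
#include <cstring>

void upload_without_coherence(cl_command_queue queue, cl_mem buf,
                              const void *src, size_t size) {
    cl_int err;
    void *p = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                 0, size, 0, nullptr, nullptr, &err);
    std::memcpy(p, src, size);                                    // CPU writes land in CPU caches
    clEnqueueUnmapMemObject(queue, buf, p, 0, nullptr, nullptr);  // flushed so the GPU can see them
}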

Endymion FRS MK1
Oct 29, 2011

I don't know what this thing is, and I don't care. I'm just tired of seeing your stupid newbie av from 2011.

Yaos posted:

Am I seeing this right, AMD's $200 processor is on the heels of Intel's $550+ processor in Crysis 3? What kind of shenanigans are going on here?

Edit: Ah, but multiplayer is a different story, way behind Intel there. Must mean Crysis 3 really likes its GPU.

If you're pulling the multiplayer numbers from the thread, keep in mind they use the alpha (and the beta I think?), not the final.

Here are the core loads, courtesy of GameGPU:



Nintendo Kid
Aug 4, 2011

by Smythe

Yaos posted:

Am I seeing this right, AMD's $200 processor is on the heels of Intel's $550+ processor in Crysis 3? What kind of shenanigans are going on here?


The Intel 2600K is a two-year-old CPU that's due to be discontinued this quarter, and so is the Intel 2500K. The AMD FX 8350 and FX 6300 are brand-new CPUs. Intel never discounts its old chips except when absolutely necessary.

The shenanigan is: AMD can finally beat 2 year old Intel chips.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Install Gentoo posted:

The shenanigan is: AMD can finally beat 2 year old Intel chips.
That are priced significantly higher.

wheez the roux
Aug 2, 2004
THEY SHOULD'VE GIVEN IT TO LYNCH

Death to the Seahawks. Death to Seahawks posters.

adorai posted:

That are priced significantly higher.

Because Intel never discounts old chips. There are newer, cheaper ones from Intel themselves that perform better.

Nintendo Kid
Aug 4, 2011

by Smythe

adorai posted:

That are priced significantly higher.

Because they're the old chips. Intel doesn't discount old chips; they want you to buy new chips. You can compare AMD chips of any price from two years ago to these two-year-old Intel chips and they'll almost always come up short, even the ones that were much more expensive.

roadhead
Dec 25, 2001

Professor Science posted:

Shared main memory != cache coherence. This kind of coherence (between CPU and integrated GPU) is one of the big promises of HSA. If I had to make a bet, I'd say the GPU can snoop the CPU L1/L2 but not vice-versa.

I doubt Sony would push this out without that feature, though. This is heavily customized and will have volume in the millions most likely over the next 8-10 years, so I'm assuming all stops were pulled.

Unless you have a PS4 Dev-kit, and then I'd assume the NDA would keep you from posting anyway ;)

Not Wolverine
Jul 1, 2007
I'm kinda thinking that I can buy an 8-core AMD for about $180-200, while I can only buy quad-core i5s for the same price. Yeah, the i5's performance per core is better, but I suspect performance per core is going to hit a wall, kinda like how GHz has hit a wall too. Now that the software we care about (games) is finally looking at multiprocessing, this could be good for AMD, until Intel adds more cores... or dumps more $$$ into exclusivity deals. Purely speculation, and I am trying to be more optimistic about AMD, because I still believe! Seriously, I still like AMD; I only own one Intel CPU because it's in my Eee PC, and I have been delaying all my other upgrades because of $$$.

JawnV6
Jul 4, 2004

So hot ...

roadhead posted:

This is heavily customized and will have volume in the millions most likely over the next 8-10 years, so I'm assuming all stops were pulled.

It's not a matter of volume ("millions over 8-10 years" is pitiful) or pulling stops. Cache coherence between heterogeneous compute cores is a Hard Problem. It's entirely possible the complexity of making the agents agree on protocol was far greater than either team could manage.

Not to mention pointless if you can just solve it in software.

roadhead
Dec 25, 2001

JawnV6 posted:

It's not a matter of volume ("millions over 8-10 years" is pitiful) or pulling stops. Cache coherence between heterogeneous compute cores is a Hard Problem. It's entirely possible the complexity of making the agents agree on protocol was far greater than either team could manage.

Not to mention pointless if you can just solve it in software.

It's hard, sure, but when you have a stable hardware platform and a supporting compiler/SDK/best practices, a lot of the variables and unknowns that make it "Hard" go away and it becomes a lot more solvable. I'm betting we get cache coherence on BOTH the PS4 and the next Xbox (when using the AMD-supplied toolchain, of course), and that is probably the thing that sold the AMD APU to Sony and Microsoft for this gen.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Colonel Sanders posted:

I'm kinda thinking that I can buy an 8-core AMD for about $180-200, while I can only buy quad-core i5s for the same price. Yeah, the i5's performance per core is better, but I suspect performance per core is going to hit a wall, kinda like how GHz has hit a wall too. Now that the software we care about (games) is finally looking at multiprocessing, this could be good for AMD, until Intel adds more cores... or dumps more $$$ into exclusivity deals. Purely speculation, and I am trying to be more optimistic about AMD, because I still believe! Seriously, I still like AMD; I only own one Intel CPU because it's in my Eee PC, and I have been delaying all my other upgrades because of $$$.

Here's the thing: a core isn't a "do only one thing at a time per clock" piece of hardware any more, hardly at all. Though a core works through one x86 instruction stream at a time (or two with simultaneous multithreading, like on a Bulldozer module or a Hyper-Threaded Intel core), each instruction is decoded into a number of smaller operations that can be executed in parallel on the core's redundant execution resources. And for the life of me, I cannot think of a realistic workload that can take advantage of more cores but also can't take advantage of that instruction-level parallelism.
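To make the ILP point concrete, here's a toy microbenchmark (my own illustration, numbers will vary by CPU): both loops do the same number of additions, but the first is one long dependency chain while the second exposes four independent chains that an out-of-order core can keep in flight at once.

code:

#include <chrono>
#include <cstdio>

int main() {
    const long N = 200000000;
    volatile double seed = 1.000000001;   // volatile keeps the compiler from folding the loops

    auto t0 = std::chrono::steady_clock::now();
    double a = seed;
    for (long i = 0; i < N; ++i) a += seed;              // one serial dependency chain
    auto t1 = std::chrono::steady_clock::now();

    double b0 = seed, b1 = seed, b2 = seed, b3 = seed;    // four independent chains
    for (long i = 0; i < N; i += 4) { b0 += seed; b1 += seed; b2 += seed; b3 += seed; }
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::printf("serial chain: %lld ms, four chains: %lld ms (sums %.1f / %.1f)\n",
                (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
                (long long)std::chrono::duration_cast<ms>(t2 - t1).count(),
                a, b0 + b1 + b2 + b3);
    return 0;
}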

Further, we haven't hit a GHz wall; we've hit a power wall. Intel, AMD, et al. could easily make 10+ GHz processors. Hell, back in 2000, Intel thought we'd be into 12 GHz Pentiums by now. (Also, read the comments on that article; they are hilarious.) But by and by, everyone discovered that power usage and heat output shoot through the loving roof when you spin clocks up that high. Even if you wanted to, and could, power a kilowatt dime-sized silicon hunk, there's no material we can manufacture that can actually cool it; you'd need a superconductor hooked up to a room-sized radiator. So instead, we've gone another way: computing power has increased as predicted, but it has come through internally parallelized cores, multiple cores, and more complex supplemental instructions like AES-NI and AVX.
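The back-of-the-envelope version of that power wall, with illustrative numbers of my own rather than anything from a datasheet: dynamic power scales with capacitance, voltage squared, and frequency, and you have to raise voltage to hit higher clocks, so power grows far faster than the clock does.

code:

P_{dyn} \approx \alpha \, C \, V^{2} f
P'/P = (V'/V)^{2} \cdot (f'/f) \approx 1.4^{2} \times 2 \approx 3.9   (V up ~40% to double f: ~4x the power for 2x the clock)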

So you can't compare a core to a core and say AMD has an advantage for giving you more, because that's not the whole story of how much computing horsepower the chip actually has. So what are you getting into with an 8-core AMD vs. 4-core Intel processor? Lemme gank some AnandTech graphs:



A clocked-up, big-cache Intel quad-core has literally 50% better performance per core, and, assuming that result included max turbo clocks, 62% better performance per core per clock. POV-Ray is a real-world render test. Not all workloads will be THAT different, but many will be. Let's gank something from Tom's:



iTunes' MP3 encoder is single-threaded, so the per-core difference shows up big time. And format conversion in iTunes, you bet your rear end that's a common use case.

But that's not the whole story; even if you do have a properly multithreaded workload, the technical details of working on one set of data with multiple cores introduces a not-insignificant amount of inefficiency:



This varies by the particular workload, but the takeaway is that adding cores is roughly analogous to adding fuel to a rocket: yeah, more lets you go further, but the extra lifting needed to bring that extra capability to where it's useful means your net gain is lower than you might think intuitively.
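The textbook version of that rocket-fuel intuition is Amdahl's law. Plugging in made-up but plausible numbers (mine, not from the benchmarks above): with 90% of the work parallelizable, doubling from 4 to 8 cores buys you roughly 50% more throughput, not 100%.

code:

S(n) = \frac{1}{(1 - p) + p/n}
p = 0.9:  S(4) = 1/(0.1 + 0.225) \approx 3.1,   S(8) = 1/(0.1 + 0.1125) \approx 4.7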

So what's the total takeaway? For some workloads, sure, those 8 slower cores for less money look pretty nice:



And for other workloads, those 8 cores for $200 are shamed by Intel's 4 cores for $200:





AMD's desktop offerings are just not competitive on raw CPU performance for most users. What they have is good-enough CPUs that match up with things they do better, like GPUs. This may change in the future (maybe, but Haswell could change that to "probably not"), but if you buy AMD's current offerings because they give you 8 cores for the price of Intel's 4, you are very likely making a mistake.

E: Durr, I called Cinebench POV-Ray. I am a dumb.

Factory Factory fucked around with this message at 20:02 on Feb 22, 2013

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
Remember that today's Intel chips are effectively descendants of the Pentium M, which was itself a reworked Pentium III, built when the Pentium 4 was found to be just too loving hot for mobile use. The heat/materials-science wall killed NetBurst, and ever since, efficiency has been the name of the game, not raw GHz. As Factory Factory said, it isn't that Intel or whoever can't make BIG NUMBERS chips, it's that they have no practical usage.

JawnV6
Jul 4, 2004

So hot ...

roadhead posted:

It's hard, sure, but when you have a stable hardware platform and a supporting compiler/SDK/best practices, a lot of the variables and unknowns that make it "Hard" go away and it becomes a lot more solvable.
They also make it pointless since you can just avoid true sharing cases and handle the complexity in sw instead of hw. I don't see you making an argument against that, just asserting that it's not impossible to do in hw.

quote:

I'm betting we get cache coherence on BOTH the PS4 and the next Xbox (when using the AMD-supplied toolchain, of course), and that is probably the thing that sold the AMD APU to Sony and Microsoft for this gen.
I get the distinct impression you've either got a lot more information or a lot less than I do.

CFox
Nov 9, 2005

Colonel Sanders posted:

I'm kinda thinking that I can buy an 8-core AMD for about $180-200, while I can only buy quad-core i5s for the same price. Yeah, the i5's performance per core is better, but I suspect performance per core is going to hit a wall, kinda like how GHz has hit a wall too. Now that the software we care about (games) is finally looking at multiprocessing, this could be good for AMD, until Intel adds more cores... or dumps more $$$ into exclusivity deals. Purely speculation, and I am trying to be more optimistic about AMD, because I still believe! Seriously, I still like AMD; I only own one Intel CPU because it's in my Eee PC, and I have been delaying all my other upgrades because of $$$.

I've seen a little of this kind of talk ever since the PS4's 8 cores were revealed, but I don't think the same line of thought will hold true for PC gaming, at least in the near future. The games on the PS4 and new Xbox will be coded to use all those cores effectively, but when it comes to PC ports, the developer can't count on having that many cores available and will likely make the games run on 2, maybe 4 cores, since that's what the vast majority of PC owners are working with. I just don't think the market share of 6+ core CPUs is in any way large enough for them to spend extra money supporting it.

Of course this is all in addition to the very good points that Factory Factory has already covered.
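For what it's worth, the standard way a PC port copes with "no idea how many cores the buyer has" is to ask at runtime and size its job system to match; a minimal sketch (my illustration, not any particular engine's job system):

code:

#include <algorithm>
#include <thread>
#include <vector>

int main() {
    unsigned hw = std::thread::hardware_concurrency();           // may report 0 if unknown
    unsigned workers = std::clamp(hw > 1 ? hw - 1 : 1u, 1u, 7u); // leave one for the main/render thread
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i)
        pool.emplace_back([] { /* pull jobs from the engine's task queue here */ });
    for (auto &t : pool) t.join();
    return 0;
}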

Not Wolverine
Jul 1, 2007

CFox posted:

I've seen a little of this kind of talk ever since the PS4's 8 cores were revealed, but I don't think the same line of thought will hold true for PC gaming, at least in the near future. The games on the PS4 and new Xbox will be coded to use all those cores effectively, but when it comes to PC ports, the developer can't count on having that many cores available and will likely make the games run on 2, maybe 4 cores, since that's what the vast majority of PC owners are working with. I just don't think the market share of 6+ core CPUs is in any way large enough for them to spend extra money supporting it.

Of course this is all in addition to the very good points that Factory Factory has already covered.

I will agree AMD is not a good buy right now (except maybe for budget builds) but, and I thought I pointed this out by labeling myself as a fanboy, I still want to believe in the day when more cores will matter, 'cause that is really the only advantage AMD has at the moment.

And really Factory, WoW, Starcraft, Skyrim, and iTunes are all obviously not optimized for more cores, I will agree. But Crysis 3 and consoles seem to think cores matter. As for iTunes, what happens when you throw a hundred MP3s at your iPod to re-encode? I could potentially see that benefiting from more cores (although 8 AMD cores that are about half as fast as Intel's 4 cores won't help, but maybe in the future damnit!). I guess the bottom line of my post is I want to believe in an optimistic future for AMD even if that is a pipe dream.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Colonel Sanders posted:

I will agree AMD is not a good buy right now (except maybe for budget builds) but, and I thought I pointed this out by labeling myself as a fanboy, I still want to believe in the day when more cores will matter, 'cause that is really the only advantage AMD has at the moment.

And really Factory, WoW, Starcraft, Skyrim, and iTunes are all obviously not optimized for more cores, I will agree. But Crysis 3 and consoles seem to think cores matter. As for iTunes, what happens when you throw a hundred MP3s at your iPod to re-encode? I could potentially see that benefiting from more cores (although 8 AMD cores that are about half as fast as Intel's 4 cores won't help, but maybe in the future damnit!). I guess the bottom line of my post is I want to believe in an optimistic future for AMD even if that is a pipe dream.

I try to avoid taking pre-release/beta benchmarks into consideration, because the last months of development tend to be mostly about performance optimization. As such, the performance numbers for Crysis 3 are gonna change, and probably dramatically. Plus, the Crysis series has always been, shall we say, forward-looking.

That said, yeah, Crysis 3 is looking to be one of those games that loves cores, no ifs, ands, or buts. Well, there's an exception: this particular set posted to the AnandTech forums seems to be an outlier, showing a more familiar Intel/AMD gap even while scaling with cores. It's a slightly older version (the alpha, rather than the beta), but there's also the difference that it selects Medium Quality over High or Very High. Maybe there's some graphical effect in HQ/VHQ that's threading/CPU-dependent, forward-looking for many-core machines, or just something that hasn't been ported to GPU code yet.

Also, if you encode 400 songs in iTunes, it will run them one after another on a single core; the encoder is not multithreaded at all. For all of Apple's chops at hardware and software, sometimes they do very silly things.
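It's the textbook easy case, too: even though each individual encode is single-threaded, the files are independent, so a batch could simply run one encode per core. A rough sketch of that file-level parallelism (encode_file is a hypothetical stand-in, not an actual iTunes or Apple API):

code:

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

void encode_file(const std::string &path) {
    (void)path;   // stand-in for a real single-threaded MP3/AAC encode
}

void encode_all(const std::vector<std::string> &files) {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&] {
            for (std::size_t i; (i = next.fetch_add(1)) < files.size(); )
                encode_file(files[i]);   // jobs are independent, so no locking is needed
        });
    for (auto &t : pool) t.join();
}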

Part of the reason I'm so pessimistic about AMD's CPU performance right now has to do with Haswell and the overhead of multicore CPUs I talked about. Haswell will include new instructions that are expected to dramatically reduce that overhead basically for free. Here's an article on it, if you're interested.

Chuu
Sep 11, 2004

Grimey Drawer

Factory Factory posted:

Part of the reason I'm so pessimistic about AMD's CPU performance right now has to do with Haswell and the overhead of multicore CPUs I talked about. Haswell will include new instructions that are expected to dramatically reduce that overhead basically for free. Here's an article on it, if you're interested.

"Free" here refers to developer time, not theoretical performance.

Chuu fucked around with this message at 06:40 on Feb 23, 2013

teagone
Jun 10, 2003

That was pretty intense, huh?

Colonel Sanders posted:

I guess the bottom line of my post is I want to believe in an optimistic future for AMD even if that is a pipe dream.

Come on. Fanboys know to stop believing when poo poo is obviously a pipe dream.

lil sartre
Feb 12, 2009

by Y Kant Ozma Post
Man CPU benchmarks are so stupid

Factory Factory posted:

And for other workloads, those 8 cores for $200 are shamed by Intel's 4 cores for $200:

Yea, I'm gonna see a difference between 190 and 230 fps. My eyes are augmented.

quote:

From 80 to 120? Phlease, thats huge. Let me tell you about CS 1.6,

quote:

I roll with 1024x768, I'm old school like that.

Now I know they use those settings because at higher resolutions/settings (which people actually use) the GPU becomes the bottleneck and you're not gonna see big differences between a $150 and a $500 CPU in most games. But that's the thing: if I only use my PC for games, why should I care that one CPU is way better than another at settings I'm never gonna use? The only thing I care about is whether a different CPU is gonna take a game from unplayable to playable at, say, 1080p and high settings; if they're gonna do CPU gaming benchmarks, that should be their goal.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Okay, here:





Yeah, in some games the CPU differences will be obscured or even obliterated by a GPU bottleneck. That doesn't mean they will never matter.

VorpalFish
Mar 22, 2007
reasonably awesometm

Plus, if you buy into Tech Report's arguments (which seem pretty reasonable), average and even minimum FPS are not really good measurements of subjective gameplay experience, because they don't always show millisecond-level spikes in frame times that can lead to choppy animation. The CPU could actually matter a lot more than anyone has thought for the past couple of years when it comes to a smooth experience:

http://techreport.com/r.x/cpu-gaming-2012/skyrim-beyond-16.gif
http://techreport.com/r.x/cpu-gaming-2012/arkham-beyond-50.gif

Article here:
http://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus/2

Though it's too old to have included the 8350 rather than the 8150, I guess.
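For anyone who hasn't read it, the metric the article leans on is easy to compute yourself: add up how far each frame overshoots a target frame time. Toy sketch with made-up numbers (mine, not Tech Report's data); note that both runs average the same FPS, but only one of them would feel smooth.

code:

#include <cstdio>
#include <vector>

double time_beyond(const std::vector<double>& frame_ms, double threshold_ms) {
    double total = 0.0;
    for (double t : frame_ms)
        if (t > threshold_ms) total += t - threshold_ms;
    return total;
}

int main() {
    std::vector<double> smooth(60, 16.0);               // steady ~62 fps
    std::vector<double> spiky(60, 12.0);
    for (int i = 0; i < 60; i += 10) spiky[i] = 52.0;   // periodic hitches; same ~62 fps average
    std::printf("time beyond 16.7 ms: smooth %.1f ms, spiky %.1f ms\n",
                time_beyond(smooth, 16.7), time_beyond(spiky, 16.7));
    return 0;
}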

pigdog
Apr 23, 2004

by Smythe
^^ Wanted to post this.

lil sartre posted:

Yea, I'm gonna see a difference between 190 and 230 fps. My eyes are augmented.

That's because they are average FPS numbers over a longer benchmark period, for most of which the numbers are GPU-limited in the first place. It's when there's really stuff happening on screen (new models and actors being created and uploaded to the GPU, new bits of map being loaded) that a weaker CPU will lag and stutter while a fast CPU will blaze through at a constant framerate. You can stare at a wall in any game on any system and get great FPS, but that's not a great indicator of the smoothness of actual gameplay.

coffeetable
Feb 5, 2006

TELL ME AGAIN HOW GREAT BRITAIN WOULD BE IF IT WAS RULED BY THE MERCILESS JACKBOOT OF PRINCE CHARLES

YES I DO TALK TO PLANTS ACTUALLY
If Intel did decide there was a benefit to 8 cores in consumer processors, would it be a relatively easy change for them to make? I ask because someone mentioned earlier that "AMD have an advantage with # of cores!", but that advantage doesn't exist if Intel can match them any time they feel like it.

By the way Factory Factory, what's your dayjob?

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

coffeetable posted:

If Intel did decide there was a benefit to 8 cores in consumer processors, would it be a relatively easy change for them to make? I ask because someone mentioned earlier that "AMD have an advantage with # of cores!", but that advantage doesn't exist if Intel can match them any time they feel like it.

By the way Factory Factory, what's your dayjob?

They could do this in the ultimate ghetto fashion with no real thought: two dies, one package, a QPI link between them. I'm pretty sure that would be straightforward enough.
That said, it would be pointless - there are already Xeons with up to 10 cores per die, so they could trickle that down.

Edit: oh wait, they didn't have a replacement for the 10-core Westmere-EX? I assumed there'd be a Sandy Bridge- or Ivy Bridge-based version. Oh well, there are 8-core Sandy Bridge Xeons.

HalloKitty fucked around with this message at 20:54 on Feb 23, 2013

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Law student.

On a basic level, it's pretty easy, especially because Intel is moving towards an "IP block" design philosophy where they can just lift up functional units and plop them down into a new design. In practice, it's still pretty easy, because that's what Sandy Bridge E is - eight Sandy Bridge IA cores instead of four. Here's a block diagram comparison:




It's a bit hard to see, but basically the IA cores are identical. Each 1MB block of cache is identical. SNB-E is basically "SNB minus graphics, doubled." So it's really roughly as easy as AMD doing 2-, 3-, and 4-module Bulldozer chips - provide for the hookups to the intercore network, and you're done. That's harder than it sounds, but it's much less hard than the transition from single-core CPUs to dual-core ones was, because the work over the years has gone into making these designs multi-core-scalable in the first place.

Right now, multicore overhead has been pushing Intel towards fewer, beefier cores (see the benchmarks above). They're still highly parallel internally, and each generation gets more so, as well as adding new instructions so that more and more complex common tasks happen in fewer clock cycles, same as AMD does. But if TSX really takes off, you may see more cores on Haswell's successors.

Chuu
Sep 11, 2004

Grimey Drawer

Factory Factory posted:

But if TSX really takes off, you may see more cores on Haswell's successors.

Reading your posts, I think you're severely overestimating the impact TSX is going to have, especially on consumer parts. TSX isn't magically going to turn single threaded code into multi threaded code, it's just going to make dealing with the headaches of shared state a lot easier, and hopefully bring a conceptually easier memory model to higher level languages. It also does almost nothing for parallel jobs, which actually is a very large percentage of what people use their cores for today.

It also ignores that Intel loves to segregate the enterprise market as much as possible from the consumer market, and that you can no longer "cheat" on enterprise licenses since most major players have been moving to per-core from per-processor license fees.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Chuu posted:

Reading your posts, I think you're severely overestimating the impact TSX is going to have, especially on consumer parts. TSX isn't magically going to turn single threaded code into multi threaded code, it's just going to make dealing with the headaches of shared state a lot easier, and hopefully bring a conceptually easier memory model to higher level languages. It also does almost nothing for ridiculously parallel jobs, which actually is a very large percentage of what people use their cores for today.

I'm not saying it's going to be a magical multithreader. I'm saying it's potentially disruptive to the current cost/benefit balance of putting effort into multithreading. The process of locking a memory space on current CPUs can lead to otherwise-multithreaded code being executed sequentially. Besides the difficulty of synchronizing threads in the first place, it's one of the major labor sinks in multithreaded coding. This inefficiency can be mitigated by locking in smaller increments, but that requires a lot more effort from the coder. TSX, if it works as advertised, will give you today's fine-grained locking performance for the programmer effort of coarse locking.
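In concrete terms, the pattern people expect from TSX is coarse-lock elision: the code still says "grab the one big lock," but on hardware with RTM the critical section runs as a transaction, so non-conflicting threads never actually serialize on it. A sketch using the _xbegin/_xend intrinsics Intel has documented for Haswell (my illustration, compiled with -mrtm; real code would retry a few times and check CPUID for RTM support before ever trying):

code:

#include <immintrin.h>
#include <atomic>
#include <mutex>

std::mutex big_lock;                      // the one coarse lock the programmer writes
std::atomic<bool> big_lock_held{false};   // lets transactions notice a real lock holder

template <typename Fn>
void with_big_lock(Fn&& critical_section) {
    if (_xbegin() == _XBEGIN_STARTED) {
        if (big_lock_held.load(std::memory_order_relaxed))
            _xabort(0xff);                // someone actually holds the lock: abort
        critical_section();               // runs speculatively; conflicts trigger an abort
        _xend();
        return;
    }
    // Fallback path: the transaction aborted, so take the coarse lock for real.
    std::lock_guard<std::mutex> guard(big_lock);
    big_lock_held.store(true, std::memory_order_relaxed);
    critical_section();
    big_lock_held.store(false, std::memory_order_relaxed);
}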

If it did work out, that would mean that the benefit of adding more cores relative to increasing per-core complexity would become greater, and maybe worth the cost in engineering and silicon to Intel. And if TSX turns out to be a flop, then nothing really changes.

quote:

It also ignores that Intel loves to segregate the enterprise market as much as possible from the consumer market, and that you can no longer "cheat" on enterprise licenses since most major players have been moving to per-core from per-processor license fees.

That may be true. TSX may end up enterprise-only. Then again, it didn't happen with MMX or any of the varieties of SSE or AVX, so who knows? Plus Apple uses non-Xeons in its mobile workstation PCs (read: Macbook Pro), and I'm sure they'd throw a hissy fit if TSX worked well and they couldn't fit a TSX-enabled chip in those systems.

Cybernetic Vermin
Apr 18, 2005

Chuu posted:

Reading your posts, I think you're severely overestimating the impact TSX is going to have, especially on consumer parts. TSX isn't magically going to turn single threaded code into multi threaded code, it's just going to make dealing with the headaches of shared state a lot easier, and hopefully bring a conceptually easier memory model to higher level languages. It also does almost nothing for parallel jobs, which actually is a very large percentage of what people use their cores for today.

I think you are vastly underestimating the number of extremely pessimistic global locks in software today. There is bound to be a lot of software that goes from being just "multi-threaded" to basically scaling linearly in some operations once memory transactions are cheap.

doomisland
Oct 5, 2004

HalloKitty posted:

Edit: oh wait, they didn't have a replacement for the 10-core Westmere-EX? I assumed there'd be a Sandy Bridge- or Ivy Bridge-based version. Oh well, there are 8-core Sandy Bridge Xeons.

Intel has 10 core E7 Xeons.

http://ark.intel.com/search/advanced/?s=t&FamilyText=Intel%C2%AE%20Xeon%C2%AE%20Processor%20E7%20Family&CoreCountMin=10

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib

Those aren't based on the Sandy/Ivy Bridge architecture though. They're from an older architecture, and there are no replacements for them, at least not based on core count.


doomisland
Oct 5, 2004

unpronounceable posted:

Those aren't based on the Sandy/Ivy Bridge architecture though. They're from an older architecture, and there are no replacements for them, at least not based on core count.

Ah, yeah, I assumed that with the new naming scheme they were updated versions.
