dissss
Nov 10, 2007

I'm a terrible forums poster with terrible opinions.

Here's a cat fucking a squid.

LethalGeek posted:

It's comical as hell. The video completely disconnects, the monitor comes back a moment later and gives me the input overlay as if I just connected something. Sometimes several times back to back. Along with others just never having a good time with their video drivers. Nah, I'll pass on their stuff video-wise.

There are a bunch of different issues being discussed in that thread, none of which you'd call common - remember there are hundreds of millions of systems with only Intel integrated graphics.

I'm sure if you visit the AMD support pages you'll run into a bunch of issues too.


dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars


Literally two-thirds of all general-purpose PC users use Intel integrated or motherboard graphics as their sole or primary video interface. I'm stunned there are that few complaints.

SYSV Fanfic
Sep 9, 2003

by Pragmatica

Sir Unimaginative posted:

Literally two-thirds of all general-purpose PC users use Intel integrated or motherboard graphics as their sole or primary video interface. I'm stunned there are that few complaints.

I'm surprised they even acknowledge the issue instead of telling people to exchange the systems. My bet is on a fab defect.

Back on topic: AMD continues to snag large corporate contracts.

I feel bad for the team that had to act like this was a huge deal.

LethalGeek
Nov 4, 2009

dissss posted:

There are a bunch of different issues being discussed in that thread, none of which you'd call common - remember there are hundreds of millions of systems with only Intel integrated graphics.

I'm sure if you visit the AMD support pages you'll run into a bunch of issues too.

Like I said, any time I've had to deal with Intel video it's been nothing but ugh, so :shrug: ATI's (and Nvidia's) stuff is way more fleshed out.

Edit: The actual computers in the house are on Intel CPUs because, pfff, come on, AMD, get it together.

vvvvv This is the kind of thing I'm talking about. Work had a lot of just weird things happening until we turned off hardware acceleration on anything with Intel-based video in it. Just lots of Nope, No Thanks with them.

LethalGeek fucked around with this message at 18:16 on Feb 13, 2015

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
I bought a Haswell i3 for my HTPC, and there is a bug/issue with the drivers that caused my CableCard-delivered video in Windows Media Center to chug and black out and generally not work. I haven't tried it since because I bought a GTX 750 Ti for it, but it was annoying as gently caress.

SYSV Fanfic
Sep 9, 2003

by Pragmatica
I was looking at AMD APU video benchmarks, and there doesn't seem to have been much gain since Trinity. Has AMD stated they are only working on power consumption, or did they realize killing off the low-end card market was a bad idea?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

SYSV Fanfic posted:

I was looking at AMD APU video benchmarks, and there doesn't seem to have been much gain since Trinity. Has AMD stated they are only working on power consumption, or did they realize killing off the low-end card market was a bad idea?

With only dual-channel DDR3 to work with, they're pretty bottlenecked; their low-end cards have far more memory bandwidth. Within that constraint, power consumption has indeed become the priority, and the gains have been pretty solid: the A10-7800 at 45W performs like the A10-6800K at 100W in many titles. The next big thing, I think, is applying Tonga's (R9 285) end-to-end memory compression scheme, which should help a lot, but I'm not sure whether that's coming in the next generation or will have to wait for Zen-uarch APUs in 2016 (which will also benefit from DDR4).
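The bandwidth gap behind that post is easy to sanity-check with a couple of lines. The clock and bus figures below are illustrative assumptions picked for the era, not numbers from the thread:

```python
# Back-of-the-envelope peak memory bandwidth: bytes per transfer x transfers per second.

def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    """Theoretical peak bandwidth in GB/s for a given bus width and transfer rate (MT/s)."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

# Kaveri-class APU: dual-channel DDR3-2133 = 2 x 64-bit channels (assumed config)
apu = peak_bandwidth_gbs(128, 2133)    # shared between CPU cores and GPU
# A typical low-end GDDR5 card: 128-bit bus at 5000 MT/s effective (assumed config)
card = peak_bandwidth_gbs(128, 5000)   # dedicated to the GPU
print(f"APU: {apu:.1f} GB/s, low-end card: {card:.1f} GB/s")
```

Even before subtracting the CPU's share, the APU is working with well under half the bandwidth of a cheap discrete card, which is the bottleneck Factory Factory describes.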

SYSV Fanfic
Sep 9, 2003

by Pragmatica

Factory Factory posted:

With only dual-channel DDR3 to work with, they're pretty bottlenecked; their low-end cards have far more memory bandwidth. Within that constraint, power consumption has indeed become the priority, and the gains have been pretty solid: the A10-7800 at 45W performs like the A10-6800K at 100W in many titles. The next big thing, I think, is applying Tonga's (R9 285) end-to-end memory compression scheme, which should help a lot, but I'm not sure whether that's coming in the next generation or will have to wait for Zen-uarch APUs in 2016 (which will also benefit from DDR4).

Thanks, that makes sense. Being poor, Trinity was very exciting to me in terms of price versus performance. Being able to run Skyrim for $275 was pretty great, even at 1366x768.

SwissArmyDruid
Feb 14, 2014

SYSV Fanfic posted:

I was looking at AMD APU video benchmarks, and there doesn't seem to have been much gain since Trinity. Has AMD stated they are only working on power consumption, or did they realize killing off the low-end card market was a bad idea?

Factory Factory posted:

With only dual-channel DDR3 to work with, they're pretty bottlenecked; their low-end cards have far more memory bandwidth. Within that constraint, power consumption has indeed become the priority, and the gains have been pretty solid: the A10-7800 at 45W performs like the A10-6800K at 100W in many titles. The next big thing, I think, is applying Tonga's (R9 285) end-to-end memory compression scheme, which should help a lot, but I'm not sure whether that's coming in the next generation or will have to wait for Zen-uarch APUs in 2016 (which will also benefit from DDR4).

Come on, guys, I know you were both reading page 86.

I'll give you the teaser from an AMD patent filing in 2012:

SYSV Fanfic
Sep 9, 2003

by Pragmatica

SwissArmyDruid posted:

Come on, guys, I know you were both reading page 86.

I'll give you the teaser from an AMD patent filing in 2012:



Now that I've looked up HBM and re-read your post, it sounds pretty good.

Horribly expensive, but good. I guess it's too early to know the cost per MB of HBM?

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I know there was a tenuous "Socket FM3" to go with Zen, but part of me wonders if AMD will say gently caress it and make the CPUs and APUs fit a G34-sized socket if they adopt HBM as L3 cache or something even more complex. Would this provide any advantage?

PC LOAD LETTER
May 23, 2005
WTF?!
G34 has all those pins for extra memory slots plus letting CPUs communicate with each other; it's not needed for HBM or any other on-package memory, really.

Socket size isn't the limitation for HBM. It's price that could (probably will) kill the idea for low-cost APUs, which is the way AMD has to price them in order to sell them.

Slapping HBM on their APUs might get them mid-tier-ish GPU performance, but it also means they'll have to price them a lot higher just to break even. WAG on my part here, but $200-300 would probably be the price range for a 'high end' APU with a 1-2GB HBM cache on package via interposer. Even tied to Excavator CPUs the performance versus price wouldn't be bad, but most enthusiasts, and those are the ones who'd be interested, probably wouldn't bother at that price. A low-end Intel chip and a mid-range dGPU would probably be better value overall, even if it ends up costing a bit more.

It sucks, but I think they're stuck being bandwidth-limited with their APUs for a long time.

SwissArmyDruid
Feb 14, 2014

PC LOAD LETTER posted:

G34 has all those pins for extra memory slots plus letting CPUs communicate with each other; it's not needed for HBM or any other on-package memory, really.

Socket size isn't the limitation for HBM. It's price that could (probably will) kill the idea for low-cost APUs, which is the way AMD has to price them in order to sell them.

Slapping HBM on their APUs might get them mid-tier-ish GPU performance, but it also means they'll have to price them a lot higher just to break even. WAG on my part here, but $200-300 would probably be the price range for a 'high end' APU with a 1-2GB HBM cache on package via interposer. Even tied to Excavator CPUs the performance versus price wouldn't be bad, but most enthusiasts, and those are the ones who'd be interested, probably wouldn't bother at that price. A low-end Intel chip and a mid-range dGPU would probably be better value overall, even if it ends up costing a bit more.

It sucks, but I think they're stuck being bandwidth-limited with their APUs for a long time.

I don't think so, not quite as much.

We honestly don't know the pricing for HBM right now, but if I had to guess, its eventual pricing can't be much more than mass-production GDDR5. If it were, it wouldn't make sense to use it across an entire product line the way Nvidia wants to do with Pascal, because that's that architecture's whole thing.

I really don't think it will cost anything remotely approaching Intel's 128 MB of eDRAM. A lot of the cost there comes from the interconnect, something HBM neatly dodges by being built for it from the ground up.

Now, the casual information I have on bulk GDDR5 is from the BOM breakdown of the PS4, which at the time was said to be $110-$140 for its 8 GB of GDDR5. Per gig, that works out to $13.75-$17.50. That's... not really that much.
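A quick check of that BOM arithmetic, using only the PS4 figures quoted above:

```python
# PS4 BOM figures quoted in the post: $110-$140 for 8 GB of GDDR5.
bom_low, bom_high, capacity_gb = 110, 140, 8
per_gb = (bom_low / capacity_gb, bom_high / capacity_gb)
print(per_gb)  # (13.75, 17.5)
```

So the per-gig figure ranges from $13.75 to $17.50; the single number quoted in the post is the top of that range.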

I feel that any increase in price will be large enough to cover the BOM + interposer + added complexity, yet low enough to let AMD market the part to ultrabook makers. It's incumbent on them to get the pricing on this absolutely right.

Pricing should come down even further with HBM2, which is slated to hit the market in 2016. Given that HBM only began risk-production sampling back in... I want to say August? That's a phenomenally fast dev cycle for doubling capacity, which means it must be something they were already cooking, plus a die shrink.

And really, given that current Intel parts with onboard memory only have 128 MB, I think AMD stands to gain a lot more performance with the 1 GB minimum per stack.

SwissArmyDruid fucked around with this message at 11:20 on Feb 14, 2015

PC LOAD LETTER
May 23, 2005
WTF?!
Supposedly it's not so much the memory cost that gets you with HBM as the cost of the interposer (effectively a huge-die IC fabbed on a 'coarser' process), plus the cost of assembly and testing. The interposer has to be gigantic, since it sits under the die (Fiji has a rumored 550mm2 die, while Carrizo's is ~250mm2) and must still be big enough for the HBM memory packages to be mounted on it too. Smaller multi-die interposers to bring down costs are being worked on, but I don't think they work outside a lab yet.

IIRC HBM2 is about giving you more bandwidth and/or RAM capacity, not necessarily bringing down costs. Depending on how they configure the GPU (i.e. going all out for more bandwidth or RAM capacity) you might see costs go up or stay the same.
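To put those die sizes in perspective, here's a rough floor on interposer area: GPU die plus HBM stack footprints plus routing margin. The per-stack footprint (~40 mm^2) and the 15% margin are my assumptions for illustration, not figures from the thread:

```python
# Hypothetical lower bound on interposer area (mm^2).
# stack_mm2 and margin are assumed values, purely illustrative.
def interposer_floor_mm2(die_mm2: float, stacks: int,
                         stack_mm2: float = 40.0, margin: float = 0.15) -> float:
    return (die_mm2 + stacks * stack_mm2) * (1 + margin)

fiji = interposer_floor_mm2(550, 4)     # rumored Fiji die with four stacks
carrizo = interposer_floor_mm2(250, 2)  # Carrizo-sized die with two stacks
print(f"Fiji-class: ~{fiji:.0f} mm^2, Carrizo-class: ~{carrizo:.0f} mm^2")
```

Even under generous assumptions, the interposer ends up far larger than any GPU die of the day, which is why it dominates the cost discussion.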

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

PC LOAD LETTER posted:

G34 has all those pins for extra memory slots plus letting CPUs communicate with each other; it's not needed for HBM or any other on-package memory, really.

Socket size isn't the limitation for HBM. It's price that could (probably will) kill the idea for low-cost APUs, which is the way AMD has to price them in order to sell them.

I was thinking simply that physical space might be a limiting factor in such a decision, not that it was specifically a limitation of HBM.

Thus the real challenge is getting ~$50-100 worth of GPU performance without adding more than ~$100 to the total cost to justify it? If they can get Zen to Skylake-like performance (an achievement in itself), I could see the costs being justified on A8 and A10 processors.

Based on released information, maybe AMD does realize there might be a cost problem, hence the use of ARM cores at the low end of Zen. A4 and A6 are ARM, A8 and A10 are x86? Maybe the replacement for the FX series is A6s and A10s with the GCN cores deactivated, giving the processor cores unlimited access to the L3 cache?

PC LOAD LETTER
May 23, 2005
WTF?!
The HBM packages themselves are tiny, about the size of a small pill, so size isn't an issue for putting them on an FM3 package.


Stolen from this thread, which BTW has tons of great info and discussion about HBM and Fiji in general.

It was other types of RAM that would've had possible packaging issues with FM3 due to size. I suppose they could've left the pinout the same, extended the package out to one side, and plopped the RAM there if they really wanted to use something off the shelf rather than a custom RAM package. There's no indication they ever wanted to go that route, though.

We have hardly any information on Zen, branding or otherwise, so only the wildest of WAGs are possible at this point. I would think that if they can get near-Skylake performance at a similar TDP they'll be thrilled. Skylake-ish performance at a significantly higher (10-25W) TDP is probably more realistic given the process and R&D disparity. I hope they can pull it off. If they do, and they sell it for a bit less, as they historically did, it could be a successful chip.

PC LOAD LETTER fucked around with this message at 15:06 on Feb 14, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
So even if they, say, double the number of GCN cores, it'd still comfortably fit on the FM3 package? HBM and DDR4 potentially make more than 8 GCN cores worth it, so maybe my poorly worded question is better put as "Would they go with a larger chip if they increased the number of GCN cores?" I wasn't arguing about the size of the HBM; SAD had corrected me earlier upthread. Still fascinating to read and learn.

If they only manage a 10W disparity and close performance, then it might start becoming a toss-up between the two brands, especially if the Zen design is still immature and they can eventually get more out of it.

PC LOAD LETTER
May 23, 2005
WTF?!
Yeah, it'd probably fit. Even at 500mm2 the die would still be quite a bit smaller than the package. Would they actually do that? I dunno. Kinda doubt it. That'd be an expensive die that they couldn't sell at a low price.

Rastor
Jun 2, 2001

WCCFtech claims to have had "the chance to talk to a guy high up in the AMD supply chain", which they split into four parts:

Part 1: About AMD sticking with 28nm. "Nolan APU is on the 28nm process ... it will only be a ‘mild’ update to the Beema/Mullins platform and be FP4 BGA packaged"

Part 2: "AMD actually had a desktop Carrizo planned, something that was later canceled ... they decided to reiterate Kaveri with a refresh named Godavari. If they do decide to go forward with a Carrizo Desktop, it won’t arrive till H1 2016"

Part 3: "Zen will be AMD’s focus for the duration of 2016 ... AMD will be releasing Zen for the server side first, then workstations and then HEDT (mainstream market)"

Part 4: "[Intel] Skylake has something brand new – Intel keeping an above top secret attitude even after NDA ... Our source stated that he is fairly certain AMD knows about this ‘new’ feature and will try to incorporate it into Zen"


And some bonus WCCFT crap, just in case four helpings weren't enough:

Carrizo presentation leaked: "The die is still based on a 28nm node yet AMD has managed to optimize the overall chip design by adding 29% more transistors than Kaveri thanks to the high-density design library. This results in a 3.1 Billion transistor die that delivers 40% lesser power consumption and 23% lesser die area than its predecessor. The H.265 encode support allows 3.5 times transcode performance of Kaveri while the compute architecture enables the 8 GCN compute units (512 stream processors) a reduction of 20% in power consumption."

SwissArmyDruid
Feb 14, 2014
PSA: WCCFT can be full of crap. Trust nothing unless you've got a second corroborating source, beyond the one they list, that has independently verified the story instead of just parroting it.

It's good practice in general, but particularly pertinent for WCCFT.

That aside, it's interesting to see AMD aping Intel by targeting servers and mobile and letting binned parts flow to desktop. I think that's the best approach, and there's no shame in stealing a better way of doing things.

Also, I'm pretty sure AMD *tried* to make a MorphCore competitor. It was that consortium they were part of last year, the thing they debuted and were shopping around for buyers, wasn't it? I gotta look back through my post history to remember the name.

EDIT: Soft Machines and the VISC architecture.

SwissArmyDruid fucked around with this message at 20:44 on Feb 23, 2015

JawnV6
Jul 4, 2004

So hot ...

Rastor posted:

Part 4: "[Intel] Skylake has something brand new – Intel keeping an above top secret attitude even after NDA ... Our source stated that he is fairly certain AMD knows about this ‘new’ feature and will try to incorporate it into Zen"

It's still so odd to see "feature" cover everything from something as minor as "a ucode trick to hide a latency" up to something as complex as "25% of the die space, a few decades of validation, and a complete overhaul of every OS".

Rastor posted:

the 8 GCN compute units (512 stream processors)

GCN reads as GameCube to me. The lil console that could, still having life 15 years later.

Zeta Niloticus
Nov 6, 2007


JawnV6 posted:

GCN reads as GameCube to me. The lil console that could, still having life 15 years later.

You could always install Linux on it.

quote:

Linux can be used:
  • to use a GameCube/Wii as a thin client
  • to use a GameCube/Wii as a multimedia terminal
  • to use a GameCube/Wii as a tiny PowerPC-based server
  • as a runtime environment for homebrew development

SwissArmyDruid
Feb 14, 2014
Another design win for AMD: MediaTek is going to start licensing their graphics. http://www.fudzilla.com/news/graphics/37209-mediatek-to-license-amd-graphics

I like to think it must be because AMD knows how to play nice with ARM on the same die.

Should be interesting, though. AMD sold off Imageon to Qualcomm for way too goddamn little, and it now forms the basis of their Adreno graphics. So technically, this is ATI vs. AMD.

This, plus all the news about glNext getting rid of embedded OpenGL, should make things pretty interesting.

SwissArmyDruid fucked around with this message at 05:30 on Mar 11, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I wonder if this is more than a mere licensing deal. There's a lot of potential profit in careful sharing of institutional knowledge. This might be what AMD needs for its ARM Zen cores, a good step into the mobile market with their strongest foot, and it could give MediaTek the advantage to push an excellent midrange series of tablets, or even break into servers, laptops, and desktops (pipe dream?).

Also, I would find it high comedy if AMD and MediaTek push Qualcomm to desperation sometime in the future, and suddenly there's an offer for 65 million dollars.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

SwissArmyDruid posted:

This, plus all the glNext getting rid of embedded

It has a name now: Vulkan, and being based on Mantle, it goes right to the heart of what AMD said: they'd be happy to share.

I really hope Vulkan gets huge support, although Microsoft wouldn't be so happy, as DX12 won't be such a draw to Windows 10.

SwissArmyDruid
Feb 14, 2014

HalloKitty posted:

It has a name now: Vulkan, and being based on Mantle, it goes right to the heart of what AMD said: they'd be happy to share.

I really hope Vulkan gets huge support, although Microsoft wouldn't be so happy, as DX12 won't be such a draw to Windows 10.

The king is dead, long live the king.

What I actually want to see is whether Vulkan can run on existing silicon, as DX12 will. That, I think, will be the biggest draw to widespread adoption beyond "we need a graphics API but we're not building for a Windows platform."

Just as a purely scientific experiment, I'd love to see whether any additional performance can be eked out of the PS4 using it, and then see how it stacks up against a DX12-enabled Xbone.

NyxBiker
Sep 24, 2014
Is the AMD Opteron 4334 optimal for a small-medium web hosting company? (On a Dedi)

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

NyxBiker posted:

Is the AMD Opteron 4334 optimal for a small-medium web hosting company? (On a Dedi)

There's no way it's optimal and I have doubts it's even acceptable. It's about half the per-core performance of an Intel Ivy Bridge Xeon, and there are now Haswell Xeons with solidly better performance per watt than Ivy. The scale-out prospects on an old AMD CPU are just awful. If you did Atom or ARM microservers you'd get far better performance per watt at a similar cost, with the downside of lower peak performance. And you could just keep some buff Xeon servers around for the relatively small number of heavy load customers, and they'd be the best of everything except for up-front hardware costs.

SwissArmyDruid
Feb 14, 2014
Somehow, we all missed this news: http://www.bbc.com/news/technology-25635719

China lifted its ban on video game consoles, and the news broke in January of this year. AMD stands to profit, since the Chinese market is potentially huge, and, what do you know? AMD's parts power all three consoles.

Beautiful Ninja
Mar 25, 2009

Five time FCW Champion...of my heart.

SwissArmyDruid posted:

Somehow, we all missed this news: http://www.bbc.com/news/technology-25635719

China lifted its ban on video game consoles, and the news broke in January of this year. AMD stands to profit, since the Chinese market is potentially huge, and, what do you know? AMD's parts power all three consoles.

From what I've seen so far, with the Xbox One officially released and the PS4 available on the grey market, the market for the new consoles is small. The consoles are expensive, you can't pirate their software, and the lack of F2P titles means the Chinese aren't really showing interest; they're sticking with last-gen consoles and PC for their gaming needs.

PC LOAD LETTER
May 23, 2005
WTF?!
That will probably change over time, though. The Chinese market is potentially too big for them not to tune their game lineup and adjust pricing to accommodate it.

Nintendo Kid
Aug 4, 2011

by Smythe

PC LOAD LETTER posted:

That will probably change over time, though. The Chinese market is potentially too big for them not to tune their game lineup and adjust pricing to accommodate it.

No, not really. They haven't really bothered to do it for places like Brazil and India, so why do it for China?

SwissArmyDruid
Feb 14, 2014

Beautiful Ninja posted:

From what I've seen so far, with the Xbox One officially released and the PS4 available on the grey market, the market for the new consoles is small. The consoles are expensive, you can't pirate their software, and the lack of F2P titles means the Chinese aren't really showing interest; they're sticking with last-gen consoles and PC for their gaming needs.

As the licensor of the tech that goes into the systems, AMD doesn't give two shits if things don't sell in China. They only care whether Sony/Microsoft/Nintendo ramp up production to have supply in China. Once the silicon leaves their hands and the money goes in their pockets, they could not give a drat.

PC LOAD LETTER
May 23, 2005
WTF?!

Nintendo Kid posted:

No, not really. They haven't really bothered to do it for places like Brazil and India, so why do it for China?
I dunno, and when you put it like that, yeah, they might not. It just seems strange to me to let a market go like that, I guess.

Nintendo Kid
Aug 4, 2011

by Smythe

PC LOAD LETTER posted:

I dunno, and when you put it like that, yeah, they might not. It just seems strange to me to let a market go like that, I guess.

It's not really letting a market go. The console still costs a certain amount to make, and you can't cut its price too much and still make up the rest on games (like you usually do). Similarly, making special region-only games when there aren't any local developers is just wasting a bunch of money if they don't sell.

Ragingsheep
Nov 7, 2009
I thought the margins on the consoles weren't great in any case - good enough for AMD to keep the lights on, but not a massive money earner.

Nintendo Kid
Aug 4, 2011

by Smythe

Ragingsheep posted:

I thought the margins on the consoles weren't great in any case - good enough for AMD to keep the lights on, but not a massive money earner.

For the console makers themselves, they typically run negative margins for the first few years, which is made up for by a few game sales and the licensing fees charged to publishers. For the chip makers, well, AMD gets a flat license fee, since they don't own production lines of their own for the console makers to buy from.

Phoenixan
Jan 16, 2010

Just Keep Cool-idge

Nintendo Kid posted:

No not really. They haven't bothered to really do it for places like Brazil and India, why do it for China?
If anything, this could just mean they end up with a remodeled PS1 or PS2, much like how Brazil ended up with the Sega Genesis 3.

Wistful of Dollars
Aug 25, 2009

AMD cuts ‘Bulldozer’ instructions from ‘Zen’ processors

Hopefully a good sign.


Lord Windy
Mar 26, 2010
Does anyone in this thread explain the problems Bulldozer had? I'm interested and want to read up on them.
