SwissArmyDruid
Feb 14, 2014

by sebmojo
It looks like if you just run this tool as-is, you may run into a contention issue where DWM is holding on to part of the memory so you can't test the full VRAM space. Anyone know how to run a standard Win7 Pro machine headless for better benchmarking?
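
In the meantime, here's a quick way to see how much of the card DWM and the driver are already squatting on before a run. A minimal sketch using the pynvml bindings (the package choice and the GPU-index-0 assumption are mine, not part of the benchmark itself):

code:
# Sketch: report how much VRAM is already claimed before benchmarking.
# Assumes the pynvml bindings are installed (pip install pynvml) and that
# GPU 0 is the card under test -- adjust for your own setup.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetMemoryInfo,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
    mem = nvmlDeviceGetMemoryInfo(handle)    # values are in bytes
    mib = 1024 ** 2
    print(f"Total VRAM:              {mem.total / mib:7.0f} MiB")
    print(f"Held by DWM/driver/etc.: {mem.used / mib:7.0f} MiB")
    print(f"Free for the benchmark:  {mem.free / mib:7.0f} MiB")
finally:
    nvmlShutdown()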

SwissArmyDruid
Feb 14, 2014

by sebmojo

1gnoirents posted:

Are you having a problem?

In any case, your comparable options would be 290 or 290x

Or wait for the 380X to come out soon.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Don Lapre posted:

Dont forget the phase change cooling unit.

Scuttlebutt sez that the top-of-the-line part will have the factory AIO cooler as in the 295X2.

HalloKitty posted:

You can always try disabling desktop composition (Control Panel > System and Security > System > Advanced System Settings > Performance Settings > Enable desktop composition).

One step better: Unplugged the monitor before running it. Was then able to address the entire memory space (outside of what the benchmark was reserving for itself).

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

How can you come to this conclusion? Legitimate question, I'm not seeing how nvidia can't get more out of their GPUs, and it honestly looks like the Radeons are heading for "super" Fermi territory. TBH, I am ignorant of the details, which is why I ask.

At the very least, we know that AMD is about to leave nVidia in the dust with regards to memory bandwidth. The next high-end cards that AMD releases will feature Hynix's stacked memory modules. You may remember, several years back, nVidia touting their own stacked memory products, which were then delayed to Pascal. That's because their own version of non-planar memory, Hybrid Memory Cube (HMC), which they were working on with Micron, fell through and the project was dropped. They will now shoot to use the joint AMD-Hynix High Bandwidth Memory (HBM) instead. The first generation of HBM is touted to offer, on paper, a memory interface 1024 bits wide per stack. The second generation of the standard, which they are working on presently, seeks to double the per-stack bandwidth, as well as the capacity per stack.

For comparison's sake, a GTX 980 has only a 256-bit memory bus. A Titan Black only has a 384-bit memory bus. (The Titan Z also only has a 768-bit memory bus, but that's a dual-GPU card, so still, 384-bit per-GPU.)

All the memory efficiency and color compression touted by Maxwell is about to be obsoleted by this advancement.

We should not expect 1:1 performance improvements due to the increase in bandwidth, by which I mean that the full-fat Fiji XT should not suddenly triple the performance of a Titan Black. But it gives AMD a serious leg up, as nVidia will not bring non-planar memory products to the market for at least one year.
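
For a rough sense of scale: peak bandwidth is just bus width times the effective data rate per pin. A back-of-the-envelope sketch (the clocks and the four-stack card are assumed round numbers for illustration, not official specs):

code:
# Peak theoretical bandwidth = bus width (bits) x data rate (Gbit/s per pin) / 8.
# Clocks and the four-stack configuration are illustrative assumptions.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(f"256-bit GDDR5 @ 7 Gbps:    {bandwidth_gb_s(256, 7.0):4.0f} GB/s")        # GTX 980 class
print(f"384-bit GDDR5 @ 7 Gbps:    {bandwidth_gb_s(384, 7.0):4.0f} GB/s")        # Titan Black class
print(f"4 x 1024-bit HBM @ 1 Gbps: {bandwidth_gb_s(4 * 1024, 1.0):4.0f} GB/s")   # four first-gen stacks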

SwissArmyDruid fucked around with this message at 04:18 on Jan 25, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

I thought only the R9 390s were to be HBM and everything else was using standard GDDR5, which explained the rumored TDP of the 380(X)? I mean, maybe they capture the enthusiast market for a year and half, Nvidia can just reclaim it with the 1000 series, correct? Are they at such a legitimate technological disadvantage that Nvidia could rollout first gen HBM on the 1000s just as AMD drops second gen cards with better thermals/consumption?

An argument could be made for the 380X using HBM as well. If AMD's implementation of the memory controller involves placing the entire HBM package directly onto the GPU die, as opposed to breaking it out to a second package, yes, that might explain where the extra heat is coming from.

I don't think this is likely, though, at least not until the second generation of HBM products. The first-generation HBM products offer 1 GB of memory in a stack four dies tall. To get the kind of memory capacities that a GPU needs, you'd need one stack per GB of conventional GDDR5, and that would make the die size balloon out of control. No, I think that if they're going to do that, it's not going to be until the gen 2 HBM parts. Those are slated to come in formats stacked either 4 or 8 layers high, at 4 GB or 8 GB per stack. PER STACK!
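
As a quick sketch of what "one stack per GB" works out to in practice (the card capacities are just example numbers):

code:
# How many stacks a given frame buffer needs, per the capacities described above.
# The target card sizes are just example numbers.
import math

def stacks_needed(target_gb: int, gb_per_stack: int) -> int:
    return math.ceil(target_gb / gb_per_stack)

for target in (4, 8):
    print(f"{target} GB card: gen 1 (1 GB/stack) -> {stacks_needed(target, 1)} stacks, "
          f"gen 2 (4 GB/stack) -> {stacks_needed(target, 4)} stack(s)")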

Just imagine what that would mean for AMD's APU parts, which have had trouble scaling their performance higher because of how bandwidth-limited they are. AMD has basically not been putting as many GCN cores onto their APUs as they COULD be, because the latency and bandwidth of talking to DDR3/4 just can't keep up.

But to answer your question about nVidia reclaiming a lead: any advantage that AMD develops or maintains will, I think, depend entirely on the implementation of the memory controller going forward, whether the HBM gets mounted directly on the GPU die or on separate packages similar to what we have now. Regardless of how they do it, I think we can expect much smaller cards, physically speaking, since there won't need to be as many chips on the board parallelizing DRAM to get throughput. AMD has been sampling these HBM products since, I think, September of last year. That's a lot of time before Pascal comes out to play around with the optimal layout for GPUs. If they play their cards right, they should be able to grow this lead into a good gain of market share.

But I think we won't be able to know for sure until we see Bermuda XT. (390X)

SwissArmyDruid
Feb 14, 2014

by sebmojo

iuvian posted:

Nice review.

I was wondering why all the previous 960 reviews forgot to mention that you can get a 280 OC for under $200 with a gig more ram.

Probably because the more appealing route is to go up about $50 and get an R9 290, with newer GCN silicon.

EDIT: Which, I note, *will* support DX12, whereas the 280-that-is-secretly-a-rebranded-7970 may or may not; things are a bit hazy.

SwissArmyDruid fucked around with this message at 01:55 on Jan 26, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo
Do we not care about the reference cooler? *I* thought having the cutout on the backplate to allow for airflow was pretty nice.

SwissArmyDruid
Feb 14, 2014

by sebmojo

The Lord Bude posted:

Why on earth would anyone use a reference cooler unless they absolutely had to? (ie super cramped low airflow case where you've got no choice but to make sure all the exhaust air leaves the case; mATX SLi)

99% of people would be in a position where a non reference design would be substantially quieter and tens of degrees cooler.

I suppose you've got weirdos enthusiasts who remove the reference cooler and go with aftermarket liquid but they're an edge case at best.

If I do get a 900-series, it will probably be a reference design, yes. I intend to put it into the ML07B when it comes out.

SwissArmyDruid
Feb 14, 2014

by sebmojo

The Lord Bude posted:

The ML07B is not one of the low airflow cases I was talking about. It is an incredibly well engineered case that manages pretty drat good cooling - it has 3x120mm fan slots, although you have to supply them yourself - combined with the small dimensions that makes for some serious airflow. In addition, the intake for a non reference graphics card is hard up against two of the fan vents, creating almost perfect conditions for a non reference cooled graphics card. You'd be shooting yourself in the foot buying a reference design card.

Also - the ML07B is virtually identical in layout to the RVZ01, other than cosmetic differences, they are the same case - Except the RVZ01 comes with two of the three fans preinstalled, and it comes with dust filtering for all the intakes, something you can't get on the ML07B.

The RVZ01 looks too Alienware-y for my tastes, hence the ML07B.

There's still nowhere for that exhaust on a non-reference board to go other than inside the case. There are no fans pushing air around inside the case, so I'd rather stick an Nvidia reference design in there so that it acts like my power supply: sucks air in, then vents it directly back outside.

I suppose some of this will become moot when the RVZ02B comes out later this year. I can actually tolerate the new case's design, assuming they bring the vent panels back flush with the rest of the sheet metal.

SwissArmyDruid
Feb 14, 2014

by sebmojo

beejay posted:

I'm never surprised when {major corporation} does something shady. I feel like this forum would absolutely crucify AMD if it was them instead of nvidia though.

I don't know about the rest of this forum, but I let AMD have it with their rebrand poo poo from 280X on down. If I wanted a 7970, I'd have bought one!

SwissArmyDruid
Feb 14, 2014

by sebmojo

HalloKitty posted:

Is this parody? NVIDIA have rebranded cards before, I seem to recall one card being named 3 different times, and I remember one card name possibly meaning one of 3 configurations.

Let's not ever try to pretend one has better naming than the other.

No, it's not parody. If anything, it would have been satire, if I hadn't meant it genuinely. If I wanted a 7950, I would have bought one, and not a 280, because the only thing a 280 offered was a fresh coat of paint. Just the same as the 280X was a rebranded 7970.

Rebranding is lovely all around.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Brad Wardell explains the difference between DX11 and DX12 like you're a five-year-old.

http://www.littletinyfrogs.com/article/460524/DirectX_11_vs_DirectX_12_oversimplified

SwissArmyDruid
Feb 14, 2014

by sebmojo

Ragingsheep posted:

So when is the r3XX supposed to be coming out?

380X in February. So, let's say March for that.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Mr.PayDay posted:

And how many of the raging gamers would have noticed the issue?

Clearly SOMEBODY over in Europe did, someone who had the programming chops to back up their claims with science, which led to that benchmarking tool, which led to where we are now.

SwissArmyDruid
Feb 14, 2014

by sebmojo
SHOTS FIRED, REQUESTING BACKUP, REPEAT, SHOTS FIRED

SwissArmyDruid
Feb 14, 2014

by sebmojo

Josh Lyman posted:

I will straight up trade my Gigabyte WindForce R9 290 for a GeForce 970. :smug:

Honestly, you of all people should know, you fellow unrepentant West Wing-ophile. Josh would probably quote Sun Tzu and advise you to take advantage of your enemy's weakness and attack.

SwissArmyDruid
Feb 14, 2014

by sebmojo

sauer kraut posted:

I hope some idiot who's panic selling his MSI 970 buys a reference blower R9 290 just like that to stick it to the man.
Don't buy that card unless you really like the smell of melting plastic and having a miniature jet engine sitting next to you.

I continue to believe that AMD is shooting themselves in the foot with regards to user experience by sticking with their two-DVI-and-some-DP-maybe-a-mini-DP-oh-and-you've-got-to-have-a-HDMI-out! I/O. When the first 200-series cards came out, some people did mods to their retention brackets, cutting out almost everything except for a thin border. This did not reduce temperatures significantly. However, what it *did* do was reduce the backpressure caused by the restrictive bracket, removing a lot of what was obstructing airflow, and changing the frequency of the air being pushed by the blower to one less irritating to the human ear, with the effect of making it subjectively quieter. TL;DR, "EEEEEEEEEEEE" to "whooooooosh".

I think that if AMD wants to stop loving themselves over with their own I/O, they should abandon all DVI ports, and just have quad DisplayPort/triple DP and one HDMI on one layer, with every other bit of the retention bracket an open grid to allow maximum airflow. And if someone still needs the DVI, throw in a passive adapter or two. (But really, Intel, AMD, Dell, Lenovo, Samsung and LG have all committed to phasing out DVI since 2010 anyways. Except for Nvidia, who still haven't committed because they're stubborn fucks that don't play nice with ANYONE, especially not AMD. :barf:)

It would have the side-effect of making sure that people use the correct connection to take advantage of Freesync/Adaptive Sync as well, something I'm sure AMD really, really, really wants people to use as soon as possible and as quickly as possible. Bonus!

SwissArmyDruid fucked around with this message at 12:57 on Jan 29, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

The Iron Rose posted:

What do I say to a dude who's advocating a 5840 over a 750 ti for budget systems?


I mean this is also the dude with a raging hateboner for Nvidia to the point where he thinks a 290 is the better card over a 970, noise and thermals don't matter at all, and bought a 700w PSU for a system pulling 200 at most. Nevertheless, is he completely insane here or am I legitimately missing something here?

With regards to 5840 vs the 750 Ti, he's insane.

With regards to the 970 vs the 290, not *quite* as much. A price/performance ratio argument _can_ be made for the 290 that's only made possible thanks to AMD's aggressive pricing + rebates. I refer you to one of Tech Report's scatter plots:



But that completely ignores the other issues you've mentioned like thermals and noise. Perhaps he does all his gaming with a good pair of over-the-ear headphones? That's one way of making noise a non-issue.

But holy crap, is he completely missing the point on 80+ with that 700W power supply. :doh: https://www.youtube.com/watch?v=dOXTZizoknc

SwissArmyDruid fucked around with this message at 22:36 on Jan 30, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo
Wow, AMD is *really* starting to bring the guns to bear, here. http://www.newegg.com/Product/Product.aspx?Item=N82E16814125499&cm_re=290x-_-14-125-499-_-Product

290X for $299 + $20 rebate.

I think this is, more likely than not, a happy coincidence where AIBs are cleaning out their old inventory in advance of the 300-series cards, but still. That's a pretty compelling argument if you can deal with the card's foibles.

SwissArmyDruid
Feb 14, 2014

by sebmojo

sauer kraut posted:

Who cares about the cruel joke that is Hawaii, if they wanna make a fuss unload (good) 280X'es for 180$ RRP.

As much as I tend to orbit in AMD's product stack, I really do believe that if you're going to buy an AMD card, it should not be a rebranded 7000-series card, for any reason other than bitcoining.

I am, of course, a filthy capitalist American pig-dog, so...

SwissArmyDruid
Feb 14, 2014

by sebmojo

Factory Factory posted:

Poor Nvidia this week... the assholes.

You know how they totally hypothetically could support FreeSync but aren't? That's because they're re-implementing it and calling it G-Sync for use in gaming laptops. As was extremely obvious yet they always denied, the G-Sync module was never required except to bring adaptive sync to non-mobile hardware. It's just an overpriced little monitor driver board that Nvidia can capture BoM dollars with.

You'd think they'd have finished with the FPGA boards and started shipping ASICs by now as well, to bring some of that BoM down.

Well, hopefully they realize that AdaptiveSync is good for the entire industry as a whole and (sheepishly) join the bandwagon.

SwissArmyDruid
Feb 14, 2014

by sebmojo

:eyepop: Dios mio...

SwissArmyDruid
Feb 14, 2014

by sebmojo
And they're calling it 980 Ti instead of Titan. Hrm. Probably still gonna be called a Titan anyways.

SwissArmyDruid fucked around with this message at 18:43 on Feb 3, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo
Tweet from Robert Hallock over at AMD: https://twitter.com/Thracks/status/561708827662245888

The AMD biases aside, he's not wrong? Civ: Beyond Earth's implementation of Mantle does something like this when run in Crossfire, where one card renders the top half of the screen and the other card renders the bottom half. It remains to be seen whether DX12 can do anything resembling this, or indeed whether it's even better than AFR.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Beautiful Ninja posted:

At least on gaming notebooks, the adaptive sync found in eDP is being enabled in software as "GSync" by nVidia even though it's actually FreeSync. I don't think we know yet if that's going to be able to happen in regular monitors using FreeSync in the future.

It's not FreeSync, it's Nvidia taking the platform-agnostic AdaptiveSync spec and then slapping the G-Sync name onto it.

* eDP (embedded DisplayPort) has, since at least 2012, had the ability to send VBLANK signals that tell the LCD to just keep refreshing whatever image is already being displayed.
* DisplayPort 1.2a is the revision that brings eDP's variable VBLANK over to desktop monitors.
* AdaptiveSync is the name for this technology on desktop monitors.
* FreeSync is AMD's name for enabling variable refresh using AdaptiveSync.
* G-Sync uses Nvidia's own scaler module and doesn't use AdaptiveSync.
* Mobile G-Sync does not use a special scaler; it just uses eDP's VBLANK signal on laptop displays.

Yeah, it's complicated, huh?

SwissArmyDruid fucked around with this message at 20:06 on Feb 3, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

BurritoJustice posted:

Mobile GSync isn't FreeSync. Desktop GSync isn't FreeSync. Nvidia has done nothing shady.

I mean I know the Nvidia hate train is in full steam right now, but come on people.

But NVidia is taking an industry standard tech and putting a proprietary name on it. (AMD is also doing this, and I also don't like it, but I like them pushing the industry standard as opposed to a proprietary black box adding to BOM.)

SwissArmyDruid
Feb 14, 2014

by sebmojo

Rastor posted:

Poor, sweet, innocent nVidia, they just happen to not have implemented DP 1.2a. This has nothing at all to do with trying to lock in its customers, no sir. They'll get right on that DisplayPort 1.2a / 1.3 support I'm sure.

Right around the same time their G-Sync scalers support the HDMI 2.0 that their 900-series video cards tout, I gather.

SwissArmyDruid
Feb 14, 2014

by sebmojo

calusari posted:

It's not fair to compare the power consumption of midrange cards like the 970/980 to the R9 380X. The 980Ti/Titan II or whatever has a 250W TDP after all.

No, it's entirely fair. The 390X is still on the horizon, and THAT will likely be what contends with the GM200 Titan in that space.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Lowen SoDium posted:

So between G-Sync and Adaptivesync, is one of them technically superior to the other? I mean, they both do the same thing, right?

G-Sync is technically superior. Because it has a frame buffer on the monitor side, it can continue to refresh the last image indefinitely. With AdaptiveSync, no frame buffer exists, so if the framerate drops too low the display eventually HAS to grab a new frame: the effect of sitting on a frame for too long is a gradual washing out (on the scale of dozens of milliseconds) toward white. Left unchecked, that would present itself to the human eye as flickering, and that is worse than any kind of screen tearing. Therefore, yes, monitor makers will program their scalers so that if it really absolutely NEEDS to, an AdaptiveSync display WILL still tear before it flickers.

You can see an example of the tearing here: https://www.youtube.com/watch?v=hnBmjN-GVuw

Still better than no variable refresh rate, though. I suggest watching the entire video.
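
If it helps, here's the choice the scaler is making, reduced to a conceptual sketch; the 40 Hz floor and the function are mine and purely illustrative, not any vendor's actual firmware:

code:
# Conceptual sketch of the low-framerate choice described above.
# The 40 Hz floor and this function are illustrative, not anyone's firmware.
PANEL_MIN_HZ = 40                    # assumed bottom of the variable-refresh window
MAX_HOLD_MS = 1000 / PANEL_MIN_HZ    # longest the panel can sit on one frame

def on_refresh_deadline(new_frame_ready: bool, has_frame_buffer: bool) -> str:
    """What the display does once it can no longer hold the current frame."""
    if new_frame_ready:
        return "scan out the new frame"          # normal variable-refresh behaviour
    if has_frame_buffer:
        return "re-scan the buffered frame"      # G-Sync module: repeat the last image indefinitely
    return "force a refresh mid-render (tear)"   # bufferless AdaptiveSync: tear rather than flicker

print(f"Panel must be refreshed at least every {MAX_HOLD_MS:.0f} ms")
print("G-Sync at 20 fps:       ", on_refresh_deadline(False, True))
print("AdaptiveSync at 20 fps: ", on_refresh_deadline(False, False))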

SwissArmyDruid
Feb 14, 2014

by sebmojo

veedubfreak posted:

Imagine how awesome cpus would be if you could buy them without all the gpu bullshit on them.

You can. They use X99 motherboards, and cost an arm, leg, and spleen.

SwissArmyDruid
Feb 14, 2014

by sebmojo

sauer kraut posted:

It's gonna be interesting to see how well GT4e/128MB cache (insert XBone joke here) fares with decent DDR4.
The <150$ graphics card segment could just disappear.

VP9 hardware support is also pretty nifty. Will that work for playing Youtube videos in Chrome? I spend so many hours a day doing that.

I foresee AMD proceeding to take that segment right back away from Intel with the release of their first HBM products. For their Iris Pro products, Intel embeds 128 MB of eDRAM on the same package, shared, I believe, between graphics memory and a victim cache for the L3 (technically making it an L4 cache). This makes it faster than system memory because it's still right next to the die.

With HBM, I think there's the possibility (since it's stacked on top of its memory controller) that we could see a single added layer of silicon on top of the processor itself (I believe first-gen HBM has a capacity of 256 MB per layer?), shared between graphics and L3.

This is very exciting news when you consider that AMD's APUs are hitting roadblocks because of how bandwidth-and-latency starved they are with DDR3, forcing them to limit themselves to just eight compute units in Kaveri.

Add in the work that they're already doing with their whole OpenCompute and HSA, and things REALLY get interesting when you can access your L3/graphics memory right there on-die.

SwissArmyDruid
Feb 14, 2014

by sebmojo

sauer kraut posted:

Going for quality of life/silent operation and strict TDP limits like Intel does for CPUs was absolutely the correct choice.
If AMD go full steam ahead with their Pirate Island stuff they're gonna look like fossils.

There's speculation that the name Arctic Islands (the 2016 generation that comes after Pirate Islands) is a veiled hint at a generation tweaked for efficiency.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Factory Factory posted:

And once it comes out, they'll get about a year of market parity with Nvidia before Team Green comes out with the next Big Thing That Makes More Money Than Gaming GPUs and changes everything again.

Well, stacked memory was *supposed* to be Pascal's thing. But since HMC went nowhere and they're having to use AMD-Hynix's HBM, I think parity will last a little longer than that.

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

Dude, AMD is [soon to be] dead, just let it go.

AMD basically wraps it up, 2017. I'll toxx on this if you want to set the rules because I'll gladly eat a ban to be wrong, but I'm pretty sure I won't.

Well, my gut feeling is that AMD as a whole company is on the turnaround. I feel that their CPU products have been waiting on technology to catch up with their ideas, and that their GPU products are fine. Not exceptional, but fine. But really, whether or not you eat a ban is entirely up to you. Don't let me stand in the way of your committing sudoku.

Really though, my investment in seeing AMD survive is purely from the "everyone wins when there's competition" angle. Don't tell me that NVidia would keep pushing their R&D to make better tech to get ahead of everyone else if AMD suddenly ceased to exist. That's not how for-profit companies work. I'll buy either company's cards, depending on which provides me the best price/performance ratio. To date, there have been no vendor-specific features that I have absolutely had to have. In flipping back and forth between ATI/AMD and Nvidia every new card (that's just how it winds up), my only preference has been that AMD handles switching and activating/deactivating multiple monitors better than Nvidia, because they put all those controls right there in the system tray icon.

But really, you should be rooting for AMD too, because otherwise we'll just have another Intel lazily incrementing their process tech instead of looking for that next big technological advancement.

SwissArmyDruid fucked around with this message at 09:45 on Feb 5, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

Rastor posted:

It would just be nice if AMD would get with the process stepping, aren't all their CPUs and GPUs on 28nm processes?

That's not AMD's fault, that's TSMC's. They've had their 16nm process delayed again and again, and their troubles with their 20nm process mean we probably won't see any of those, either. Both AMD and Nvidia are looking to jump to the next node past 20nm already. AMD has moved their 28nm parts over to GloFo's 28nm SHP to squeeze out a bump in clock speeds in the meantime.

GlobalFoundries recently entered into a partnership with Samsung to license their 14nm FinFET process. It's rumored that the partnership has yielded enough capacity that AMD could just use GloFo exclusively for all their parts. AMD is scheduled to at least start using GloFo's 14nm LPP process by the end of the year, though. (Sampling Zen, perhaps?)

Let's be clear, GloFo can offer 14nm LPE process chips now, but AMD is ostensibly waiting on LPP to come online, because it has better performance in a couple of areas.

Nvidia is sticking with TSMC (presumably because they don't have as nice a relationship with GloFo) and are still slated to use their 16nm process. But unless something happens with TSMC's 20nm process that dramatically changes things, expect more 28nm parts out of both of them.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Boiled Water posted:

It is at least partly their fault since they sold off their fabs following a long line of terrible business decisions.

That chestnut is so goddamn old as to be completely irrelevant to the current discussion, and even then, it does nothing to explain TSMC's woes at 20nm.

AMD spinning off their foundries into GloFo is not the goddamn problem here. Why the gently caress would you even bring that up for any reason other than grasping at straws because you don't understand a single goddamn thing about what I just said?

That's like jumping in on a very intricate discussion about the subtleties of Greek philosophy and the underpinnings laying the foundation for thought going forward, and loudly yelling, "GREEKS WERE BRAIN DAMAGED BY THE LEAD IN THEIR PIPES". A fact, to be certain, but *nothing to do with the goddamn topic*, and you just draw dirty looks from the adults talking. And the worst part is that you're just standing there with your chest puffed out and your fists on your hips, looking proud of yourself, completely oblivious!

You've got loving Google at your fingertips. If you're going to try to participate in a discussion about something you don't know, at LEAST put some freaking effort into it.

SwissArmyDruid fucked around with this message at 20:59 on Feb 5, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

Boiled Water posted:

Having direct control and oversight over your production would certainly help mitigate or resolve problems before they become a problem. See also: why Intel don't have this problem.

Intel touts tardy Broadwell Core CPUs for laptops, PCs: http://www.theregister.co.uk/2015/01/05/intel_broadwell_u_launch/
Intel’s 14nm 'Tock' Dilemma: http://www.eejournal.com/archives/articles/20141204-intel14nm/
London Calling: Intel's 14-nm process delay: http://www.eetimes.com/author.asp?doc_id=1266255
Is the sky falling for Intel’s 14nm Broadwell?: http://semiaccurate.com/2014/02/19/sky-falling-intels-14nm-broadwell/

I'm sorry, what? Even taking into account that Intel delayed Broadwell's release to allow their customers to clear out Haswell silicon, it doesn't change the fact that Intel's own 14nm process was delayed by two quarters, and that they canned their plans for a 14nm Arizona facility.

Again. Your statement (I hesitate to even consider calling it an argument or even discussion point) is irrelevant to the current situation, because AMD doesn't have their own fabs, and process delays can happen to ANYONE, not just people who don't own their own fabs.

And lest you forget, it is not just AMD here. Nvidia has never had fabs. They would still be in the same situation regardless of whether AMD still owned GloFo.

SwissArmyDruid fucked around with this message at 22:16 on Feb 5, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

HalloKitty posted:

Upgrade from 270X to 960? That's not worth it; not a big enough performance increase.

Agreed. A 970 will last you longer, even with the gimpy last half-gig.

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

Here's a question - exactly how much would say, a R9 370X have to outperform a GTX 970 for many to consider it, where the power consumption and heat are pointless factors compared to performance. IIRC, the 380X leaked benches indicate better performance than the GTX 980, so it looks like the 370X will be competing with the 970.

Does it have to be equal? Or can they manage 2:3 ratio between performance/consumption and it'd still be worth it?

If power consumption, heat, and noise are pointless factors compared to performance, then whichever one delivers a better performance/price ratio once shipping and taxes are included is the one I'll buy.

SwissArmyDruid
Feb 14, 2014

by sebmojo

AnandTech posted:

DirectX 12 will only be available on Windows 10. Windows 8/8.1 and Windows 7 will not be receiving DirectX 12 support.

Well, here's hoping that all the keylogging and tracking stuff isn't carried over to release, so I can actually be comfortable using it.
