repiv
Aug 13, 2009

Incredulous Dylan posted:

Man, I was eyeing one of those 4K ROG Swift monitors, just accepting that I wouldn't really be gaming on it at 4K with the 780 ti. Now I want to sell the 780...

The Swift is 1440p, not 4K.

Probably for the best; what kind of hardware would you need to drive 4K at 144hz :gonk:


repiv
Aug 13, 2009

LiquidRain posted:

I imagine AMD can spare an extra framebuffer in VRAM and push that to the monitor if a frame is taking too long.

The problem is you don't know how big a window you have to push a repeat frame into. If the real frame completes before the repeat finishes, you either have to abort the repeat (causing tearing) or force the real frame to wait, meaning you see the same frame for two full 1/144s ≈ 7ms intervals even though the frame may have only taken 10ms to render.

The RAM on the G-Sync module means they can stream in fresh frames and draw repeated frames in parallel, without repeat frames ever hogging the line.
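
To put rough numbers on that, here's a toy Python model of the two cases. The timings and the assumption that a repeat kicks off at the ~6.9ms mark are mine, not anything NVIDIA has published:

```python
# Toy model of the repeat-frame problem described above: without local memory on
# the scaler, a repeat refresh that has already started blocks the link, so a
# frame rendered in 10 ms doesn't hit the screen until two full 144 Hz intervals.

SCAN_OUT_MS = 1000 / 144  # ~6.9 ms: one full refresh interval at 144 Hz

def show_time_without_buffer(render_ms, repeat_started_at_ms):
    """Scaler guessed wrong and began re-sending the old frame from the GPU;
    the new frame can't go up until that repeat finishes scanning out."""
    return max(render_ms, repeat_started_at_ms + SCAN_OUT_MS)

def show_time_with_buffer(render_ms):
    """Scaler redraws repeats from its own RAM while the new frame streams in
    over the link, so the new frame goes up as soon as it's ready."""
    return render_ms

if __name__ == "__main__":
    render_ms = 10.0            # the frame only took 10 ms to render
    repeat_start = SCAN_OUT_MS  # assume a repeat was started at the ~6.9 ms mark
    print(round(show_time_without_buffer(render_ms, repeat_start), 1))  # 13.9 ms (two full intervals)
    print(round(show_time_with_buffer(render_ms), 1))                   # 10.0 ms
```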

repiv fucked around with this message at 14:18 on Mar 23, 2015

repiv
Aug 13, 2009

LiquidRain posted:

Same problem exists if you self-panel-refresh by putting the memory on the monitor. You'll be doing some form of frame pacing when you get that low.

The same problem exists, but there's more they can do to alleviate it since they're not bound to displaying frames at the exact time they're received.

The 768MB RAM on the G-Sync board is enough to buffer about eight 1440p frames, so if they receive a new frame while re-drawing an existing one they could place it in a queue and use slight time-warping to catch up without having to tear or skip a frame.

(I don't know if that's what they're doing, but that's a lot of RAM to put on there for no reason...)

repiv
Aug 13, 2009

Kazinsal posted:

You can fit 54 uncompressed 2560x1440x32 frames in 768 MB. They're a bit over 14 MB each.

Yeah I converted bits to bytes twice. oops
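
For anyone following along, the corrected arithmetic works out the way Kazinsal says:

```python
# Straight arithmetic, nothing vendor-specific: one uncompressed 2560x1440
# frame at 32 bits per pixel, and how many fit in the module's 768 MB.

width, height, bits_per_pixel = 2560, 1440, 32
bytes_per_frame = width * height * bits_per_pixel // 8  # 14,745,600 bytes
mb_per_frame = bytes_per_frame / 2**20                  # ~14.06 MB

print(round(mb_per_frame, 2))          # 14.06 MB per frame
print(768 * 2**20 // bytes_per_frame)  # 54 frames fit in 768 MB
```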

repiv
Aug 13, 2009

suddenlyissoon posted:

Are the 970/980 really receiving only a gimped version of DirectX12 or have the children of NeoGAF finally melted my brain?

This is the specific post

I wouldn't worry about this too much; the most important features of DX12 are changes to the software model, and those will work identically on any tier of hardware.

repiv
Aug 13, 2009

FaustianQ posted:

Is this confirmed somewhere? Because if so then maturity on Freesync makes Gsync pointless - no one is going to be playing games or watching things at 9 frames per second (no one sane).

People keep tossing around that FreeSync supports 9-240hz, but that's kind of misleading. You get to choose one of these ranges: 36-240hz, 21-144hz, 17-120hz or 9-60hz.

It could still theoretically outperform G-Sync's 30-144hz range, but in practice that hasn't happened yet.
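
For what it's worth, the usual way to stretch a range downward is to repeat frames. Here's a rough Python sketch of that general idea; this is my illustration of the concept, not how AMD or NVIDIA actually implement it:

```python
# Sketch of the frame-multiplication idea for framerates below a panel's minimum
# refresh: show each rendered frame N times so the effective refresh rate stays
# inside the panel's supported window.

import math

def repeats_needed(fps, panel_min_hz, panel_max_hz):
    """Return how many times each frame must be shown so that
    panel_min_hz <= fps * repeats <= panel_max_hz, or None if impossible."""
    if fps >= panel_min_hz:
        return 1 if fps <= panel_max_hz else None  # too fast: needs capping, not repeats
    repeats = math.ceil(panel_min_hz / fps)
    return repeats if fps * repeats <= panel_max_hz else None

if __name__ == "__main__":
    print(repeats_needed(20, 36, 240))  # 2 -> panel refreshes at 40 Hz
    print(repeats_needed(9, 36, 240))   # 4 -> panel refreshes at 36 Hz
    print(repeats_needed(25, 48, 75))   # 2 -> 50 Hz; narrow ranges make this harder
```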

repiv
Aug 13, 2009

SwissCM posted:

17-120hz is all you really need anyway. Once a monitor comes out with those kinda specs it's pretty much game over for g-sync.

AMD are using the "wouldn't that be great?" method of system design, by setting a goal that may not be achievable and making other companies take the fall if it turns out not to be.

I hope FreeSync does well for the sake of competition, but these superior specs mean nothing without a proof of concept implementation to back up their feasibility.

repiv fucked around with this message at 20:51 on Mar 23, 2015

repiv
Aug 13, 2009

FaustianQ posted:

Likely dumb question, but what are the baseline requirements for Vulkan to work, any idea? It doesn't seem GCN or DX11/12 bound so...would it work with a ye olde GTX280 or HD4870? A Geforce FX5950?

I don't think AMD or NV have said anything concrete about Vulkan support, but it's pretty safe to assume that cards being updated to DX12 will also get the Vulkan treatment. They seem to be targeting a similar baseline although it's hard to be sure since neither spec is public yet.

That means GTX400+, HD7000+/GCN APU and Haswell Iris and Iris Pro chips are a sure thing.

HD5000/6000 has the same feature level as HD7000+, but AMD aren't supporting DX12 on VLIW chips so Vulkan probably isn't happening either.

repiv fucked around with this message at 01:25 on Mar 24, 2015

repiv
Aug 13, 2009

e: I'm a dumbass who can't read charts

repiv
Aug 13, 2009

Not a refresh, but there's supposed to be a 980ti coming. Fully unlocked Titan X core with 6GB VRAM instead of 12GB.

e: Should have read the context, that's going to be slightly out of your price range :doh:

repiv fucked around with this message at 15:41 on Apr 11, 2015

repiv
Aug 13, 2009

Subjunctive posted:

I think the access pattern matters too; I wouldn't be surprised if some of the work for the "GTA V version" of the NVIDIA drivers is to tune for exactly that.

GTA is probably the worst-case scenario for the drivers to work with. On most games they can track which assets get used the most and keep them in the 3.5GB section, but GTA is streaming poo poo in and out constantly so it won't know what the gently caress.

repiv
Aug 13, 2009

BadAstronaut posted:

This has 4GB of VRAM though:
http://www.techpowerup.com/reviews/MSI/GTX_970_Gaming/32.html

Looks like that is the one I am gonna get, from the fps scores they achieved in their tests.

There's a quirk inherent to all 970s which makes the last 512MB of VRAM run much slower than the rest.

The driver tries to be smart and push less frequently used resources into the slow region, so it's not that bad unless you're pushing extreme resolutions and MSAA with SLi 970s.
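
If you're curious what "tries to be smart" could look like, here's a hypothetical sketch of that kind of placement policy in Python. The heuristic and the numbers are mine; NVIDIA hasn't published how their driver actually decides:

```python
# Hypothetical placement policy: keep frequently-used resources in the fast
# 3.5 GB segment and spill the least-used ones into the slow 512 MB segment.

FAST_BYTES = int(3.5 * 2**30)
SLOW_BYTES = 512 * 2**20

def place_resources(resources):
    """resources: list of (name, size_bytes, uses_per_frame).
    Returns {name: "fast" | "slow"}, filling the fast segment hottest-first."""
    placement, fast_used = {}, 0
    for name, size, uses in sorted(resources, key=lambda r: r[2], reverse=True):
        if fast_used + size <= FAST_BYTES:
            placement[name] = "fast"
            fast_used += size
        else:
            placement[name] = "slow"   # overflow lands in the slow 512 MB region
    return placement

if __name__ == "__main__":
    assets = [
        ("render targets", 600 * 2**20, 120),
        ("streamed city textures", 2800 * 2**20, 40),
        ("distant LOD textures", 700 * 2**20, 2),
    ]
    print(place_resources(assets))  # distant LOD textures end up in the slow zone
```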

repiv
Aug 13, 2009

Gwaihir posted:

That would make sense, I'd be really surprised if they didn't do that.

This Eurogamer article confirms the driver allocates the regions intelligently based on usage, so it's pretty safe to assume the Windows buffers that are only read once per frame for compositing would get shunted to the slow zone.

Engines which use virtual texturing or sparse voxel rendering could cause serious problems unless you capped them at 3.5GB, but Megatexture hasn't exactly taken the world by storm :v:

repiv fucked around with this message at 17:49 on Apr 20, 2015

repiv
Aug 13, 2009

Are you sure you need dual HDMI? You can just use a passive DVI to HDMI converter if you're only concerned with video, not audio or the more exotic features of HDMI.

repiv
Aug 13, 2009

Has there been a more confusing and meaningless naming scheme than AMD's current one?

They're selling VLIW5, GCN 1.0, GCN 1.1 and GCN 1.2 cards under the 200-series brand, with lower-numbered cards often having newer-generation architectures and more features than higher-numbered ones :eng99:

repiv
Aug 13, 2009

HalloKitty posted:

Although not the worst, it is a bit bizarre that 7xx and 9xx straddle each other, with the 750 being a Maxwell card, the 760 and 780 not being Maxwell, but the 960, 970 and 980 being Maxwell.

There's some logic there: the 750ti is first generation Maxwell, which to the end-user is closer to the 700 series in features since it lacks MFAA, HEVC and HDMI 2.0.

repiv
Aug 13, 2009

Panty Saluter posted:

Is that something that could be fixed in a driver or is it baked into the silicon?

I think this is how it breaks down:

Decode: Only the 960 has dedicated HEVC decoding silicon, but the driver has a hybrid decoder for any other 1st/2nd gen Maxwell card which uses a mix of the GPU core and the CPU.
Encode: All 2nd gen Maxwell cards have dedicated HEVC encoding silicon, but there's no fallback for first generation hardware.
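
Same info as above, just collapsed into a little lookup table; the category names are mine, and "hybrid" means the driver's GPU+CPU decoder:

```python
# HEVC support on Maxwell as described above, as a small lookup table.

HEVC_SUPPORT = {
    # architecture: (decode, encode)
    "maxwell_gen1": ("hybrid (GPU core + CPU)", None),        # e.g. GTX 750 Ti
    "maxwell_gen2": ("hybrid (GPU core + CPU)", "dedicated"), # e.g. GTX 970/980
    "maxwell_gen2_gm206": ("dedicated", "dedicated"),         # GTX 960 only
}

def hevc_capabilities(arch):
    decode, encode = HEVC_SUPPORT[arch]
    return {"decode": decode, "encode": encode or "not available"}

if __name__ == "__main__":
    for arch in HEVC_SUPPORT:
        print(arch, hevc_capabilities(arch))
```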

repiv
Aug 13, 2009

Is the setting you changed Grass Quality by any chance? The higher settings are brutal on performance but they only kick in outside the city.

repiv
Aug 13, 2009

veedubfreak posted:

Horizontal motherboards are the way of the future.

Nah, 90 degree motherboards are the way of the future. Every manufacturer, clone the Silverstone FT02/FT05 design, TIA

repiv
Aug 13, 2009

You could already do all that with DX11 albeit with more overhead, so showing it on DX12 without stating the hardware it's running on tells us absolutely nothing :crossarms:

repiv
Aug 13, 2009

He means the general algorithms and techniques used in engines have improved in ways that didn't need new APIs, so it's not fair to use a snapshot from years ago to compare API versions.

For example SMAA was developed during the DX11 era but it works just fine on DX9 hardware, so a deferred DX9 or DX10 renderer written today would look much better than a vintage one which uses FXAA or worse.

repiv fucked around with this message at 22:25 on Apr 30, 2015

repiv
Aug 13, 2009

AFR with mis-matched GPUs is a recipe for disaster. Hopefully multi-adapter is nuanced enough to do more fine-grained workload splitting, like always rendering the main passes on the discrete GPU then applying post-process passes on the IGP.

repiv
Aug 13, 2009

Truga posted:

From what I could understand of the dx12 change, it doesn't really see an extra gpu, instead it just sees an extra 1000 or however many processors your gpu has. How this will work in practice with the gpu ram split between the cards remains to be seen, but if people can make engines that exploit this efficiently, multi gpu might end up being better than single gpu in pretty much every way.

If DX12 is anything like Mantle (and it probably is), it's actually the exact opposite. Under Mantle the engine sees each GPU as a completely distinct entity much like how OpenCL or CUDA do things.

Each GPU has its own command queue and any VRAM allocation only exists on one card, so the engine is responsible for splitting up the work and combining the results by issuing GPU-GPU memory copies.

They might use simple methods like AFR or something more clever where each card does different tasks, causing less duplication across the cards' VRAM and allowing higher utilization than SLi/CF did.

So... DX12 might be amazing for multi-GPU but it depends. Implementations will range from "efficiently splits between GPU and IGP" to "chokes if the GPUs aren't matched" to "only uses the first GPU :effort:"
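
Here's a conceptual Python sketch of that explicit model; the class and method names are hypothetical stand-ins, not Mantle's or DX12's real API:

```python
# Each GPU is a separate device with its own queue and memory; the engine does
# the work splitting and the cross-GPU copies itself.

class Device:
    """Stand-in for one physical GPU with its own command queue and VRAM."""
    def __init__(self, name):
        self.name = name
        self.allocations = {}

    def allocate(self, buf_name, size):
        self.allocations[buf_name] = size   # exists on this card only

    def submit(self, work):
        print(f"{self.name}: executing {work}")

def copy_between(src, dst, buf_name):
    """Engine-issued GPU-to-GPU copy; nothing is mirrored automatically."""
    dst.allocations[buf_name] = src.allocations[buf_name]
    print(f"copy {buf_name}: {src.name} -> {dst.name}")

if __name__ == "__main__":
    discrete, igp = Device("discrete"), Device("igp")
    discrete.allocate("scene_rt", 64 * 2**20)
    discrete.submit("main geometry + lighting passes")
    copy_between(discrete, igp, "scene_rt")      # hand the frame to the IGP
    igp.submit("post-process passes on scene_rt")
```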

repiv fucked around with this message at 19:03 on May 1, 2015

repiv
Aug 13, 2009

Any 970 will work, but if the factory overclocks are different they will both run at the speed of the slower card by default.

repiv
Aug 13, 2009

More specifics on how DX12 multi-adapter works.

tl;dr there are three modes developers can use:
Implicit: Game sees one GPU; the driver handles all load splitting as with SLi/CFX. Still requires driver profiles for good performance.
Explicit Linked: Game sees a single pseudo-GPU with multiple command queues and memory pools; each card can seamlessly access the other's RAM over the SLi/CF bridge or XDMA.
Explicit Unlinked: Game sees completely independent GPUs, allowing mis-matched combinations like discrete+IGP or AMD+nVidia. Cross-GPU memory copies have to make a slow round-trip via the CPU.

So don't go mixing AMD and nVidia cards in your machine just yet, games that actually support that won't be the norm :v:
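
As a rough illustration of how an engine might choose between those modes (the enum and the selection logic are entirely my own, not anything from Microsoft):

```python
from enum import Enum

class MultiAdapterMode(Enum):
    IMPLICIT = "driver splits work, needs per-game profiles"
    EXPLICIT_LINKED = "one pseudo-GPU, shared memory over bridge/XDMA"
    EXPLICIT_UNLINKED = "independent GPUs, copies round-trip through the CPU"

def pick_mode(engine_has_mgpu_path, gpus_are_matched_same_vendor):
    if not engine_has_mgpu_path:
        return MultiAdapterMode.IMPLICIT           # fall back to driver-managed AFR
    if gpus_are_matched_same_vendor:
        return MultiAdapterMode.EXPLICIT_LINKED    # fastest cross-GPU path
    return MultiAdapterMode.EXPLICIT_UNLINKED      # e.g. discrete + IGP, AMD + nVidia

if __name__ == "__main__":
    print(pick_mode(engine_has_mgpu_path=True, gpus_are_matched_same_vendor=False))
```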

repiv fucked around with this message at 15:06 on May 5, 2015

repiv
Aug 13, 2009

If they use Explicit Unlinked at all, it will be IGP+GPU yes.

Nobody is going to buy AMD+NV setups until games can use it, and no developer is going to pay the performance penalty over Explicit Linked unless many users have mixed-vendor setups. Never going to happen IMO.

repiv
Aug 13, 2009

Microsoft can just tell them to cut it out for WHQL certification if they try any shenanigans.

Trying to get decent frame pacing with mis-matched discrete GPUs is problematic enough that nVidia shouldn't need to deliberately hinder it though :confuoot:

repiv fucked around with this message at 18:54 on May 5, 2015

repiv
Aug 13, 2009

Official specs for the mid-range R9 300 cards are out. Confirms that AMD are going to market ancient GCN 1.0 parts with no Freesync, TrueAudio or LiquidVR support as "new" cards :suicide:

repiv
Aug 13, 2009

HBM is neat and all, but it all comes down to what price point they can hit (taking into account inflated prices if HBM availability and interposer yields are poor).

If all they can manage in the $300-400 price range is sharpie-ing a "3" over the "2" on existing cards and hoping nobody notices, their marketshare freefall isn't going to stop :ohdear:

repiv fucked around with this message at 23:49 on May 6, 2015

repiv
Aug 13, 2009

Tanreall posted:

None of the OEM cards listed have HBM, nor are they $300-400 cards

I know, I wasn't referring to the OEM cards. Maybe they'll pull an amazing new core architecture out of nowhere but all signs are pointing to HBM being their only killer feature, so getting HBM parts to a price where people will actually consider them instead of a 970 is critical.

repiv
Aug 13, 2009

The 970 leaks before it was announced showed a short board, so I think that is the actual reference design. What's sold as a "reference 970" now is actually the 980 reference PCB with a 970 core on it.

repiv
Aug 13, 2009

Reference watercooling for a single chip card? What TDP are they up to now :stonk:

repiv
Aug 13, 2009

Another pic surfaced, seems it is a tube-connected radiator like the 295X2.

repiv
Aug 13, 2009

For what it's worth one of the leaks said the 980Ti will be clocked ~130mhz higher than the Titan X, but their overclocking headroom will be more or less identical so it hardly matters.

repiv
Aug 13, 2009

Not him, but AMD's drivers have some severe CPU bottlenecking issues in Project CARS. It uses DX11's optional command lists to achieve multi-threaded rendering, which AMD has never bothered to implement except as a one-off for Civilization V.
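
For reference, the multi-threaded rendering pattern being described looks roughly like this; these are Python stand-ins for D3D11's deferred contexts and FinishCommandList/ExecuteCommandList, not the real API:

```python
# Worker threads record command lists in parallel; the main thread submits them
# in order on the immediate context.

from concurrent.futures import ThreadPoolExecutor

def record_command_list(pass_name, draw_calls):
    """Stand-in for recording into a deferred context, then FinishCommandList."""
    return [f"{pass_name}: draw {d}" for d in range(draw_calls)]

def execute_command_lists(command_lists):
    """Stand-in for ExecuteCommandList on the immediate context (single-threaded)."""
    for cl in command_lists:
        for cmd in cl:
            pass  # actual GPU submission would happen here
    return sum(len(cl) for cl in command_lists)

if __name__ == "__main__":
    passes = [("shadows", 400), ("opaque", 1200), ("transparent", 300)]
    with ThreadPoolExecutor() as pool:
        # recording scales across CPU cores only if the driver supports it well
        lists = list(pool.map(lambda p: record_command_list(*p), passes))
    print(execute_command_lists(lists), "draw calls submitted")
```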

repiv
Aug 13, 2009

The sad part, now that I think of it, is that the Xbox One runs the game perfectly, on an AMD GPU, on a weak-rear end CPU, with DX11. Microsoft's AMD drivers do multi-threading better than AMD's AMD drivers :eyepop:

repiv
Aug 13, 2009

beejay posted:

"Perfectly" on the Xbox one meaning sub-1080p and probably around 30fps of course

It holds 50-60fps on a 1.7ghz APU, and resolution is irrelevant since that doesn't increase CPU load. Meanwhile the PC version chokes before hitting 60fps at similar settings on a 4790K@4.5ghz.

It's the same GPU architecture, the same graphics API and the same performance, but the latter requires an order of magnitude more CPU power to pull it off.

repiv fucked around with this message at 00:49 on May 8, 2015

repiv
Aug 13, 2009

cat doter posted:

That's just in Project CARS; usually the R9 290 trounces the 660ti, but of course you get instances like that where AMD performs poorly. It's probably a complex problem: perhaps the developers were only really given resources to optimise by nvidia, perhaps AMD's drivers are just borked for that game, or perhaps nvidia made it difficult for Slightly Mad Studios to get help from AMD, but that's a bit too conspiratorial for me. The end result is the only thing that matters anyway.

Slightly Mad Studios posted:

We’re reaching out to AMD with all of our efforts. We’ve provided them 20 keys as I say. They were invited to work with us for years, looking through company mails the last I can see they (AMD) talked to us was October of last year.

:shrug:

repiv
Aug 13, 2009

Normally I'd agree, but with 2GB cards having been the norm last generation, I think it's worth abandoning that ship now that games' VRAM requirements are going through the roof.


repiv
Aug 13, 2009

Nope, both of the fans on my MSI are definitely switched off.
