The_Franz
Aug 8, 2003

redstormpopcorn posted:

I think the PC is going to get many more, much better console ports this generation since both of the performance-oriented consoles are literally running on PC hardware. I wonder if we'll see AMD release cards with an APU and 6+GB GDDR5 with the advertising slant of "just drop a PS4 in your PC!"

The next generation of APUs coming next year will support GDDR5 memory controllers, so a unified GDDR5 memory pool will be possible. The catch is that any GDDR5-equipped boards are going to be BGA-integrated boards that can't be upgraded.

The_Franz
Aug 8, 2003

Agreed posted:

This would be an awesome point for the thread's resident industry dev person to come in and expand on how efficient render calls are on the consoles vs. the overhead introduced by an OS, and how highly parallel they can actually be without so much poo poo in the way of scheduling... Paging, paging?

Is there any advantage anymore? The new consoles aren't the unitaskers of previous generations. They are running real operating systems (Windows/FreeBSD) with enough background tasks to justify reserving gigs of memory for the OS, and they seem to be running actual compositors or windowing systems, judging from how you can multitask while jumping in and out of games, as opposed to the current method of reserving a millisecond or two to draw some GUI elements over the frame before the buffer flip.

The_Franz
Aug 8, 2003

Factory Factory posted:

The point of a console OS and a multicore CPU, whether divided logically like the PS4 or virtualized like the Xboner, is that you can devote a fixed set of resources to unitasking while still having resources left over for multitasking. It does matter and there is an advantage. Windows by itself doesn't have a mechanism to guarantee resource availability in the same way.

True. I was addressing the issue of console OSes no longer being "lite" in the sense that they now take more memory and may have even more going on in the background than an idle desktop OS, and GPU operations are more expensive since they are now dealing with multiple GPU contexts and windowing systems as opposed to having exclusive GPU access like in the past.

The_Franz
Aug 8, 2003

Factory Factory posted:

And even so, console OSes are "lite" in the sense that they offer bare-metal hardware access in a way that programming through DirectX on Windows does not. There is very little abstraction from the hardware, and optionally none. You just don't get that in Windows, even with OpenGL.

Even on the PS3 and 360 you never really had total access to the hardware. You were still running on top of a hypervisor, a kernel with a task scheduler and an OS layer that needed a certain amount of time on specific cores. You had almost exclusive GPU access, but your completed frame still went through the OS so it could draw GUI elements on top of it. The PS3 had even more overhead since the disc encryption made it impossible to do unbuffered IO. I think the last platforms where you could boot up and have basically nothing else between you and the hardware were the PS2 and GameCube.

The PS4 and Xbone are still built for games, but no matter how you look at it there is a lot more overhead this time around. It's just the nature of the beast when you want proper multitasking, the ability to record footage and the ability to watch your cable box in a window next to your game.

The_Franz fucked around with this message at 03:21 on Sep 24, 2013

The_Franz
Aug 8, 2003

Schpyder posted:

I think they actually could, and AnandTech makes the same point I was going to before I even loaded that article to read some more about Mantle:

It's the new consoles.

Low-level APIs are de rigueur in the console space. Since AMD is supplying the CPU & GPU for the XB1 and PS4, and since they're both based on GCN, if they keep Mantle as similar as possible to the APIs they provide on the new consoles, then any dev doing cross-platform development has very little work to do to enable Mantle support. And in that regard, their PC market share is largely irrelevant.

Mantle isn't going to be on the consoles. Microsoft has flat out stated that the only API available on the Xbone, like its predecessors, is D3D. The PS4 already has a proprietary PS3-esque low-level API along with an OpenGL implementation and apparently a D3D compatibility wrapper built around that to ease porting. Mantle is Windows and *nix only. Even then, AMD's OpenGL guy has stated that the performance difference between Mantle and a properly-implemented modern OpenGL rendering system should be minimal, so whether it's really worth targeting this proprietary API remains to be seen.

The_Franz
Aug 8, 2003

Yaos posted:

So this may be the big thing Nvidia was announcing, Nvidia G-SYNC.
http://blogs.nvidia.com/blog/2013/10/18/g-sync/

Using a chip in the monitor, and a GeForce card ( :argh: ), the monitor provides a variable refresh rate that eliminates screen tearing without using V-Sync and eliminates stuttering. It works with most GeForce cards. According to the Anandtech live blog it will be available in Q1 2014.

This will be incredibly awesome for applications like video playback and emulation where you are constantly fighting to keep the audio and video in sync due to clock drift or the refresh rate of the output display not quite matching, or even being way off from, the source material. No more weird timing tricks, triple-buffering, occasional skips, tearing or resampling audio to keep everything in sync.

This needs to be in every monitor and TV. Right now.

The_Franz
Aug 8, 2003

Arzachel posted:

Mantle isn't going to be used by Nvidia, much like PhysX isn't going to be used by AMD. What AMD are betting the farm on, and what differentiates Mantle from Glide, PhysX, etc., is that the API would be implemented in the bigger engines to be closely compatible with the console paths, thus porting a multiplat to Mantle would have a low opportunity cost, since you've already written the code once. This rarely ends up as straightforward in practice, so we'll have to wait and see, but I'd say Mantle has a far greater chance to stick than GPU-accelerated PhysX ever did.

Except the consoles don't use Mantle or anything resembling it and probably never will. You have DX11 on the Xbone and the PS4 lets you use OpenGL and whatever Sony's proprietary API is. Right now, you have to write a Mantle render path strictly for AMD PCs.

The_Franz
Aug 8, 2003

Blorange posted:

I was under the impression that Mantle exists not to make the GPU run faster, but to make the CPU more efficient in executing calls to the graphics architecture. If your graphics card is already maxed out you're not going to see much of a benefit. On the other hand, if your CPU is capping your FPS, you could render far more discrete objects on the screen or get the boost to hit the 120hz mark. If this is the case, only people with twinned+ high end cards are really going to see the difference.

Their big claim is that it has significantly reduced draw call overhead vs Direct3D. I want to see a benchmark of Mantle vs a modern OpenGL pipeline as I suspect that the differences will be much smaller, if they exist at all.

The_Franz
Aug 8, 2003

Incredulous Dylan posted:

What's confusing me about G-Sync is what it offers in practical benefits over just having a 120Hz monitor. I've had one for years for 3D gaming and I can't remember the last time I actively saw tearing in a modern game. Maybe I am just used to tearing, but I thought if I wasn't cranking fps around in the 120 area I wouldn't need to worry. I always have the quality on max which keeps things in most modern releases at 60-80 fps.

For games it means that you won't see tearing or drop to 30fps if your card can't maintain 60fps.
For video playback it means that you no longer have to choose between tearing, audio resampling or the occasional missed frame due to the audio and video clocks not being in perfect sync.
For more niche applications like emulation it means that you can properly emulate systems that don't run at exactly 60Hz. Even arcade machines that run at oddball refresh rates like 53Hz will be silky smooth and tear-free.
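
To put rough numbers on the emulation case, here's a minimal sketch (my own illustrative arithmetic in C, not anything from the G-SYNC announcement) of why 53Hz content on a fixed 60Hz panel has to judder, tear or resample, while a variable-refresh display just scans out each frame when it's ready:

code:

/* Illustrative arithmetic only: fixed 60Hz panel vs. variable refresh
 * when the source runs at an oddball 53Hz. */
#include <stdio.h>

int main(void)
{
    const double display_hz = 60.0;  /* fixed panel refresh */
    const double source_hz  = 53.0;  /* e.g. an oddball arcade board */

    double slot_ms  = 1000.0 / display_hz;    /* ~16.67 ms per refresh slot */
    double frame_ms = 1000.0 / source_hz;     /* ~18.87 ms per source frame */
    double doubled  = display_hz - source_hz; /* frames shown twice each second */

    printf("fixed 60Hz: %.2f ms slots for %.2f ms frames -> about %.0f frames\n"
           "per second get held for two slots (judder), unless you tear or\n"
           "resample the audio instead\n", slot_ms, frame_ms, doubled);
    printf("variable refresh: every frame is shown for exactly %.2f ms\n", frame_ms);
    return 0;
}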

The_Franz
Aug 8, 2003

BurritoJustice posted:

DICE says 10% increased development schedule for up to 20% GPU performance increase.

Those figures aren't AMD vs Nvidia, but rather Mantle vs D3D. Once the API is actually publicly released we should be able to see some benchmark comparisons between an optimized AMD path with Mantle and an OpenGL path that uses the Nvidia extensions.

The_Franz
Aug 8, 2003

beejay posted:

I read all of his posts and was looking forward to his take on the preliminary Mantle results. I was a bit disappointed when it was basically trying to make fun of AMD for spending a bunch of time and money on "only" 8-10% gains (which is actually a pretty big deal) and just generally crapping on it.

The problem is that one benchmark doesn't really tell the whole story. We don't know if the Mantle renderer in Frostbite had further optimizations beyond replacing API calls, nor do we know how Mantle compares to APIs other than D3D. Valve saw similar gains from moving to OpenGL from D3D, so until AMD releases the API publicly and some open tests are conducted, all we can do is guess.

The_Franz
Aug 8, 2003

Factory Factory posted:

TechReport has a little recap of some Game Developer Conference sessions that might be of interest. Bottom line: it looks like the DirectX and OpenGL folks saw Mantle and want to do that poo poo.


Meanwhile, OpenGL's session is about reducing driver CPU overhead to zero and features folks from AMD, Intel, and Nvidia together.

OpenGL has actually had things like multi-draw-indirect for a few years now, which lets you concatenate thousands or even tens of thousands of draw calls into one. People are only just noticing now because Valve has started dragging the entire industry away from its Windows-centric viewpoint.
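
For reference, a minimal sketch of what that looks like on the application side (the buffer and function names are my own, and it assumes a GL 4.3+ context with headers and a loader like glad or GLEW already providing the entry points):

code:

/* Fill an array of indirect draw commands and submit them in one call. */
typedef struct {
    GLuint count;          /* index count for this draw */
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
} DrawElementsIndirectCommand;

void submit_scene(GLuint indirect_buf,
                  const DrawElementsIndirectCommand *cmds, GLsizei num_draws)
{
    /* Upload the whole command list... */
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 num_draws * sizeof(DrawElementsIndirectCommand),
                 cmds, GL_DYNAMIC_DRAW);

    /* ...then thousands of draws go out with a single API call. */
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                (const void *)0, num_draws, 0);
}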

What will be really nice is when all of the vendors finally support ARB_bindless_texture and concepts like texture units and texture array limitations become a distant bad memory.
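
The gist of bindless, as a rough sketch (illustrative only, assuming the extension is present and GL headers/loader are set up): you trade bind points for 64-bit handles that can live in a buffer.

code:

/* GL_ARB_bindless_texture: get a handle, make it resident, then pass the
 * 64-bit handle to shaders via a UBO/SSBO instead of binding to a unit. */
GLuint64 make_bindless(GLuint texture)
{
    GLuint64 handle = glGetTextureHandleARB(texture);
    glMakeTextureHandleResidentARB(handle);  /* must be resident before use */
    return handle;  /* store it in a uniform or shader storage buffer */
}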

The_Franz fucked around with this message at 19:33 on Feb 26, 2014

The_Franz
Aug 8, 2003

Malcolm XML posted:

It's a lot less Valve and a lot more iPhone that's been the drive toward OpenGL

The issue is that ensuring that extensions are supported on all of your client platforms is a complete pain. DirectX at least lets you know that if you program to DX11, you get all of DX11's features.

I suspect we'll see implementations where one is optimized for XB1/DX and one for whatever PS4 runs (Mantle?) before we see people explicitly optimizing for OpenGL on the desktop.

OpenGL and OpenGL ES on mobile devices are two different beasts. Most of the techniques and optimizations they've been discussing at recent developer conferences aren't applicable to ES. Fragmentation isn't really a problem as long as you pick a core spec and stick to it. It only becomes an issue if you really want to use some of the bleeding-edge or vendor-specific features.
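
The usual guard looks something like this sketch (the function name is my own; assumes a GL 3.0+ context with headers and a loader already set up): program to the core version you picked, and only probe for the optional extras.

code:

#include <string.h>
#include <stdbool.h>

/* Walk the extension list of the current context. */
bool has_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; i++) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (ext && strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

/* usage: only take the fancy path when the driver actually offers it, e.g.
 *   if (has_extension("GL_ARB_bindless_texture")) { ... } */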

None of the consoles support Mantle and they probably never will. The XB1 strictly uses D3D and the PS4 has its own proprietary low-level API similar to the PS3's, as well as an OpenGL implementation (and I think a D3D emulator on top of that) for more easily porting games that aren't bleeding-edge AAA titles. On the desktop, more and more companies are already starting to target modern OpenGL. Some slides during a presentation at Valve's dev conference gave away that Epic has been working with Nvidia to optimize the OpenGL backend in Unreal Engine 4, and both AMD and Nvidia have been improving OpenGL support in their profiling tools as well as porting them to Linux, so there is definitely some developer demand for it.

The_Franz
Aug 8, 2003

deimos posted:

Especially since they are pushing for 9 to replace XP. I don't mind the shorter dev cycles if it means more cutting-edge features, as long as they don't gouge for updates; hopefully MS makes 9 a sub-$60 upgrade.

It still won't help with the China problem, where a whole lot of people are using modern hardware but running XP.

Jan posted:

And it's not like this is frilly stuff that developers and users won't care about. Mantle is the proof that there is a demand for more efficient drivers and APIs. The current console generation's focus on multiple cores forces developers to multithread everything, including rendering. This can be done efficiently with the PS4's command lists and, to an extent, the Xbox One's deferred contexts. But the same deferred contexts are a net loss by default on PC because every single draw call carries so much drat overhead. The only way around this currently is to have AMD or nVidia hand code optimizations for every game that decides to multithread its rendering.

Nvidia's been championing multi-draw-indirect plus uniform buffer objects on OpenGL for a while now. Rather than multithreading things in the driver to handle applications making lots of individual draw calls, you just build a big list of geometry in the application and then send it off to the video card with one call.

The_Franz
Aug 8, 2003

Factory Factory posted:

Hey, API nerds: Valve just released their DirectX 9.0c-to-OpenGL code as open source. It's a mostly-complete DX9.0c compatibility layer meant to be compiled directly into the game binary, for use in porting games to Linux or Mac OS.

The source for their vogl OpenGL profiler/debugger was released too.

The_Franz
Aug 8, 2003

OpenGL 4.5 was announced at SIGGRAPH today. They finally made direct state access core and introduced a couple of new extensions (context flush control and some D3D11 feature emulation), but nothing too terribly interesting.
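
For those who haven't followed it, a quick sketch of what direct state access changes (illustrative code using GL 4.5 entry points, assuming headers and a loader are already in place): objects are created and edited by name instead of through bind points, so setup code stops trampling global binding state.

code:

/* Pre-DSA: editing a texture means binding it first. */
GLuint make_texture_old(int w, int h, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);  /* clobbers the current binding */
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, w, h);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

/* GL 4.5 DSA: same result, no bind required. */
GLuint make_texture_dsa(int w, int h, const void *pixels)
{
    GLuint tex;
    glCreateTextures(GL_TEXTURE_2D, 1, &tex);
    glTextureStorage2D(tex, 1, GL_RGBA8, w, h);
    glTextureSubImage2D(tex, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}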

The big news is the official announcement that they are working on a new clean-slate API meant to compete with Mantle/Metal/D3D12. Hopefully they actually pull it off this time, unlike with OpenGL 3, since having an efficient, modern API that isn't tied to one hardware vendor or platform will be a great thing.

The_Franz
Aug 8, 2003

Lowen SoDium posted:

Regarding AMD and an equivalent to ShadowPlay:

The Steam beta client just updated with this in the change log:


Looks like AMD had some hardware encoding capabilities.

Some of their APUs have had a hardware video encoder since 2012 and the SDK to use it has been available since last year, but AMD really didn't go out of their way to advertise it for whatever reason.

The_Franz
Aug 8, 2003

This bit of Rich Geldreich's rant from a few months ago bears repeating:

quote:

AMD can't update its driver without breaking something. They will send you updates or hotfixes that fix one thing but break two other things. If you single step into one of this driver's entrypoints you'll notice layers upon layers of cruft tacked on over the years by devs who are no longer at the company. Nobody remaining at AMD understands these barnacle-like software layers enough to safely change them.

...

This could be a temporary development, but AMD's driver seems to be on a downward trend on the reliability axis. (Yes, it can get worse!)

On that note, it looks like Valve has started publicly tracking issues with AMD's Linux drivers. Maybe it will publicly shame them into fixing those little issues like certain titles showing only a black screen or crashing on startup.

The_Franz fucked around with this message at 17:38 on Sep 17, 2014

The_Franz
Aug 8, 2003

Hace posted:

If you only have 1080p then you're already going insanely overkill with a 970, just stick with that.

A lot of Ubisoft games just barely stay over 60fps at 1080p and max settings on these high-end cards.

The_Franz
Aug 8, 2003

How is a setting level called 'ultra' that actually requires really high-end hardware the result of lazy developers? If anything it shows that they actually care enough to put ridiculous settings in there for people with really high-end hardware.

When Doom 3 came out 10 years ago the ultra texture setting was basically unusable as it needed something like 512 megs of video ram at a time when 128 was considered normal.

The_Franz
Aug 8, 2003

Jan posted:

As I have said time and time again, SLI (and Crossfire) is a black box that game developers have absolutely no access to. The only thing devs can do that is directly SLI-related is get the GPU count. Beyond that, there apparently are a few general considerations with regards to dependencies between render targets and avoiding early sync of select few GPU queries. But since there are no tools to speak of that let devs profile SLI bottlenecks, the bulk of the work of getting SLI to work is pretty much always engine-specific driver-side hacks from nVidia/AMD's part. nVidia does send teams of developers on-site to help with nVidia-specific development, but from my personal experience that was more to implement their own fancy features (TXAA, VisualFX, etc.) than actually optimizing developers' poo poo.

This is one of the things that the new APIs should provide a massive improvement on. Mantle provides application-level control over multiple discrete GPUs and I would imagine that gl-next and DX12 will do the same.

The_Franz
Aug 8, 2003

1gnoirents posted:

Lol, yeah let's not confuse an 850°C heat gun with a hairdryer

edit: side note, open to tips for overclocking my hairdryer

Seriously, modern BGA components are attached to boards using ovens and hot air guns that heat the whole assembly up enough to melt the solder balls between the chips and the board, and the lead-free solder in use now needs temperatures upwards of 220°C to melt. You aren't going to do anything by blowing a hair dryer onto a video card.

The_Franz
Aug 8, 2003

AMD has also made quite a lot of information about the GCN architecture public. I don't think Nvidia has published anything equivalent about Kepler or Maxwell.

The_Franz
Aug 8, 2003

Jan posted:

There have been claims that Mantle allows manually controlling Crossfire, which I am certain is bullshit, but there's no way to know for sure.

Straight from the horse's mouth:



I wouldn't be surprised if GL-next and DX12 had a similar level of control over multi-GPU scenarios since developers want it and it means that the IHVs wouldn't have to spend time building support into their drivers for every application.

The_Franz
Aug 8, 2003

Anti-Hero posted:

I'm using Mordor as the benchmark for the next-gen multiplat games. It runs very well on a wide variety of systems, but could absolutely take advantage of >4GB VRAM for higher resolutions. lovely porting of other titles notwithstanding, that's just the way things are headed.

People seem to run Mordor with high textures on 2 gig 750Ti cards without problems. Frame rates vary with vsync off when the game has to swap textures to and from system memory, but it's still a stable 30fps with vsync on since the swapping doesn't cause that much of an FPS hit.

People need to remember that modern games tend to treat VRAM like operating systems treat system RAM. They cache textures and don't unload them if they don't have to, since unused RAM is wasted RAM and there is a good chance they will be needed again. If the VRAM is full and the game needs to load a new texture, it's faster to just overwrite an old, unused one than to go through the full texture allocation process. Just because a tool tells you that a game has 3.5 gigs of textures allocated doesn't mean that it's actually using everything that's loaded.
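
In the spirit of that description, here's a toy sketch of the bookkeeping (purely illustrative, not any engine's actual code): nothing gets freed until an allocation wouldn't fit, and then the least-recently-used entry is recycled.

code:

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t texture_id;  /* whatever the engine keys textures by */
    size_t   bytes;
    uint64_t last_used;   /* frame number of last reference */
    int      resident;
} CacheEntry;

#define MAX_ENTRIES 4096
static CacheEntry cache[MAX_ENTRIES];
static size_t vram_used, vram_budget;

/* Recycle least-recently-used entries until 'needed' bytes would fit. */
static void make_room(size_t needed)
{
    while (vram_used + needed > vram_budget) {
        CacheEntry *victim = NULL;
        for (int i = 0; i < MAX_ENTRIES; i++)
            if (cache[i].resident &&
                (!victim || cache[i].last_used < victim->last_used))
                victim = &cache[i];
        if (!victim)
            break;             /* nothing left to evict */
        vram_used -= victim->bytes;
        victim->resident = 0;  /* its VRAM gets reused rather than returned */
    }
}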

When new titles actually have problems running on 3 or 4 gig cards, super-high-res texture add-ons and settings called 'ultra' notwithstanding, then worry about VRAM amounts.

The_Franz fucked around with this message at 20:56 on Oct 31, 2014

The_Franz
Aug 8, 2003

Jan posted:

Don't fool yourself. When nVidia comes on site, they don't give a poo poo about optimizing the game. They're only there to push their API exclusive features.

Optimization comes later, when they put in special driver paths and inject their own rewritten, optimized shaders over the game's, so they can release a new driver that touts a 20% performance increase without any of their optimization efforts helping the competition.

The_Franz
Aug 8, 2003

cat doter posted:

It depends on the TV, but 1080p doesn't really mean all that much when it comes to resolution settings for TVs. For one thing, some have built-in over/underscan that you can't disable, which will throw off the pixel mapping and give you blurry text/games. If you can find a setting that changes the pixel mapping or disables overscan then it should fix the sharpness problems. If it's a cheap TV though and it doesn't let you do that, you're poo poo outta luck.

I actually had the same problem and my only solution was to switch to a VGA cable and make sure the TV was set to a proper native 1080p signal using a program called Custom Resolution Utility. VGA cables don't output the same clean digital signal as HDMI does, but if the resolution is set correctly it can be a massive improvement over hosed up looking fuzzy HDMI.

How old is your TV? I don't think I've seen a 1080p TV made in the last 5 or 6 years that doesn't default to 1:1 pixel mapping when feeding it the native resolution via HDMI. Even the older 1366x768 displays I've seen do 1:1 mapping when feeding them their native resolution.

The_Franz
Aug 8, 2003

kode54 posted:

At least with my Asus VG278H, when I made the mistake of hooking it up to my Radeon R9 270X with an HDMI cable instead of DVI-D, the Windows drivers as well as OS X defaulted to feeding it YCbCr 4:2:2 instead of RGB. OS X needed an EDID override to remove the TV attribute, Windows just needed Catalyst Control Center to be reconfigured to output RGB.

On top of that, it's really annoying how AMD's drivers still default to 10% underscan when using the HDMI connection, even when using the display's native resolution. I could understand doing this 10 years ago when most people had CRTs and projection TVs with massive overscan, but, with the exception of crappy bottom-shelf Chinese displays, TVs default to 1:1 pixel mapping these days.

The_Franz
Aug 8, 2003

Jan posted:

The best part is how in some cases, it loves to forget that you changed that setting in the CCC and resets to underscan. Reboot? Reset. Change resolution? Reset. UAC prompt? Freaking reset.

:psypop:

Had that issue on one of my studio's workstations and ended up finding a registry fix after scouring Google for an hour.

The sad thing is that fixing this is probably just a matter of changing one or two lines that set the default value, but for whatever reason they just won't do it. I can't imagine that people really want this behavior since neither Nvidia nor Intel do it by default.

The_Franz fucked around with this message at 15:15 on Dec 2, 2014

The_Franz
Aug 8, 2003

This is interesting. Out of nowhere, Nvidia basically built their own "next-gen" API on top of OpenGL with the giant extension NV_command_list.

http://www.slideshare.net/tlorach/opengl-nvidia-commandlistapproaching-zerodriveroverhead

It's not perfect, as being built on legacy OpenGL means that it still suffers from nonexistent multithreading support when it comes to actually submitting commands or updating buffers, but it's probably as good as a legacy API is going to get.

The_Franz
Aug 8, 2003

HalloKitty posted:

New AMD driver is coming soon.

Handful of new features and fixes. Anyone jealous of DSR on NVIDIA cards now gets the same on AMD cards, for example.

EDIT: Ah, nevermind. Some news sites made it sound like the actual drivers were posted early and then taken down.

The_Franz fucked around with this message at 16:18 on Dec 5, 2014

The_Franz
Aug 8, 2003

Party Plane Jones posted:

Changing the scaling settings in CCC should have fixed it; underscan on HDMI cables for AMD cards has been around for at least a couple years. It's definitely hugely annoying.

There's a registry fix to do this permanently for all resolutions.

http://blog.ktz.me/?p=323

The_Franz
Aug 8, 2003

Oh look, AMD finally fixed their driver so it doesn't underscan on the HDMI port by default.

http://support.amd.com/en-us/kb-articles/Pages/GPU-5001.aspx

The_Franz
Aug 8, 2003

Don Lapre posted:

Staples price matches.

All of the $100-ish cards on the Staples website are horribly outdated and/or lowest-of-the-low-end junk that would actually be a downgrade from a newer Intel GPU. Rewards points or not, it's a complete waste of money when you can get a GTX 750 or an R7 260 for about $100 on Newegg or Amazon.

The_Franz
Aug 8, 2003

Factory Factory posted:

3) A GeForce 760-ish card really can't take much advantage of more than 2 GB of RAM while maintaining ~60 FPS by itself. Like, here's some benchmarks testing Far Cry 3. At 4K, the extra VRAM makes a 500% difference - from 2 FPS to 10 FPS. It's clearly a 1920x1080 card, and 1920x1080 only broke the >1 GB benchmark consistently a year or two ago.

It does prevent frame time spikes when games just assume that they can have 3+ gigs of textures cached at one time and 2 gig cards end up swapping textures to and from system memory.

The_Franz
Aug 8, 2003

sauer kraut posted:

That's just Ubisoft's shitness.

http://www.guru3d.com/articles_pages/dragon_age_inquisition_vga_graphics_performance_benchmark_review,9.html
You scratch 2GB at 1080p on an ultra setting that will cause 980s to dip below 60fps.
Yeah, it's not enough for grognarded uncompressed Skyrim 4K texture packs, who cares.

Except there are already a growing number of games, not just lovely Ubisoft titles that run terribly on everything, that want 3+ gigs of VRAM if you want textures equivalent to the consoles. Cry all you want about crappy ports or lazy developers, but it is what it is and it's not going to get better.

Looking at current cards that have 2 and 4 gig models available, the price difference is minimal. Some of the 4 gig variants of the R9 270X are actually cheaper than the 2 gig versions from the same manufacturer.

The_Franz
Aug 8, 2003

Beautiful Ninja posted:

It does seem to be particularly bad in AC: Unity in the benches I've seen, wonder if the 128-bit bus is specifically hampering it in that game, or if it's something like driver issues or Ubisoft being incompetent as gently caress.

AC: Unity runs like total garbage on everything. Even on medium at 1080p it takes a 980 to hit 60fps most of the time and anything below a 970 will see dips below 30. It can't even maintain 30fps on consoles where they can tune the settings for a fixed hardware spec.



Ubisoft games are just badly programmed and optimized in general and shouldn't be considered the norm. See also: Far Cry 4 refusing to start on systems with fewer than 4 cores, even though people have shown that it runs fine on 2 with a hack to remove the check.

The_Franz
Aug 8, 2003

NVidia spoke on the issue:

http://techreport.com/news/27721/nvidia-admits-explains-geforce-gtx-970-memory-allocation-issue

quote:

The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory. However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section. The GPU has higher priority access to the 3.5GB section. When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory then we use both segments.

The_Franz
Aug 8, 2003

Subjunctive posted:

Are people able to trigger this in games? I hadn't seen anyone link it to content behavior yet.

If you look at the Tech Report article, they have some real-world usage numbers relative to a 980 when you exceed the 3.5GB threshold. The relative performance drops are only a couple of percentage points more on the 970 than the 980.

The_Franz
Aug 8, 2003

Fauxtool posted:

I get why people are mad. It's still a great card for the price, right?
Is there any possible outcome to this beyond Nvidia saying "oops, sorry"?

Is this something that will only affect people trying to play current games at 4K, or will it have negative effects at low resolutions too? I imagine that you've got to really push it to notice anything, and not a lot of people are doing that, so they will never notice.



To use a car analogy: I don't think I would care if my car had 50hp less than advertised if it could still hit the same 0-60 time as in the brochure. If that error resulted in a lower resale value I would be upset, and I can see something similar for the card. Even though it's a great card, you are getting less than you paid for and that's a bummer.

It seems that in games the worst-case scenario is a 3-4% performance drop, since the vast majority of your resources are still going to be in the fast region. If you are doing compute work on it then it might be a bigger deal, since operations on data in the upper region will see a massive performance hit.
