Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I was doing some back-of-the-envelope math to figure out the core configuration of the supposed 8870 using this table on Wikipedia, and I'm getting roughly double a 7850: 2048 shaders, 128 texture units, but still 32 ROPs. A 256-bit memory bus using 6GHz GDDR5, just like the GTX 680. The FLOP numbers that page gives seem about 9% low by my math, but I guess that's not that bad. The transistor count is down 1B from Tahiti, though I suppose they could have shaved that much off by reducing the memory controller count and caches and using simpler shader clusters with less DP throughput. Cramming that into 160W at 1050MHz is a neat trick though, unless that's an "average" TDP and the max is actually higher.
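
To spell out the arithmetic, here's a quick Python sketch; the 2 FLOPs per shader per clock figure is the standard FMA assumption, and the spec numbers are just the rumored ones, not anything confirmed:
code:
# Rough single-precision FLOPS and bandwidth for the rumored 8870 specs.
# Assumes 2 FLOPs per shader per clock (one FMA); all figures are rumors.
shaders = 2048
core_clock_mhz = 1050
gflops = shaders * 2 * core_clock_mhz / 1000  # 4300.8 GFLOPS

bus_width_bits = 256
gddr5_effective_gtps = 6.0
bandwidth_gbs = bus_width_bits // 8 * gddr5_effective_gtps  # 192.0 GB/s

print(f"{gflops:.1f} GFLOPS, {bandwidth_gbs:.0f} GB/s")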

Edit: Of course this could always be a complete fake.

Alereon fucked around with this message at 00:42 on Sep 17, 2012

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Space Racist posted:

I remember 5 or 6 years ago there used to be actual dedicated PhysX cards. Not that anyone actually bought one, but are those still useful at all, or are they entirely obsolete by now compared to previous-gen spare Geforces?
They only work with old versions of the PhysX library. Modern games require current versions and would only run in software or on a GPU.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
1050 isn't divisible by any power of two other than 2, which is probably related to the issue. I'd guess that for optimization reasons they used shaders that expect the resolution to be divisible by 16, and then failed to render a bit outside of the viewport to prevent graphical anomalies. You can run into similar issues with video, which MUST have a resolution that is divisible by 16 (because macroblocks are 16x16 pixels, ignoring obsolete formats). This is why 1080p video is actually 1088 pixels tall; you can sometimes see a colored band at the bottom of the picture if your video decoder/player doesn't know to crop that off.
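
The padding math itself is trivial (a quick sketch, nothing driver-specific):
code:
# Coded video dimensions get rounded up to the next multiple of 16 because
# macroblocks are 16x16 pixels; the decoder is supposed to crop the excess.
def coded_size(width, height, macroblock=16):
    round_up = lambda x: (x + macroblock - 1) // macroblock * macroblock
    return round_up(width), round_up(height)

print(coded_size(1920, 1080))  # (1920, 1088) -- why "1080p" is stored 1088 tall
print(1050 % 16)               # 10 -- 1050 isn't divisible by 16 either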

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
A note for anyone using Firefox with AMD videocards: the Catalyst 12.8 drivers have been blocklisted due to stability issues. Use the Catalyst 12.9 beta drivers instead.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
The GTX 650 Ti really should be positioned against the HD 7770; if nVidia isn't pricing the card appropriately, that is a bit ridiculous.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

movax posted:

That's pretty slick, not to mention clever and relatively cheap. You get the netlist for the ARM core from ARM, free to implement it on your chosen process, and make whatever changes you deem fit. Since it's ARM, I could see them leveraging AXI to create an interconnect between the ARM cores and their logic, or their own high-performance bus.
The really interesting thing to me is that there actually are no ARM cores here. Project Denver is an implementation of Transmeta's Code Morphing technology to execute ARM code on a custom-designed nVidia core. The original plan was to execute both x86 and ARM on the same cores, but Intel successfully sued to block this x86 compatibility, arguing that the x86 license didn't transfer to nVidia when they acquired the corpse of Transmeta.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech's Geforce GTX 650 Ti review is out. The leaks were correct: if you don't buy the Radeon HD 7850 (preferably 2GB) instead, you are an idiot. If nVidia cuts $20 off the price, though, it becomes a reasonable step up from the Radeon HD 7770, especially when overclocked. Interestingly enough, all tested cards overclocked to precisely identical settings.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Animal posted:

I am looking at Geforce 670's on Amazon. There is a $30 difference between the regular EVGA 670 and the FTW model, and a bigger jump to Superclocked. Does the factory overclock make a big difference on these cards?

1440p, Skyrim, BF3 etc
Don't get the FTW model, it has stacked power connectors that will break compatibility with many aftermarket coolers. Factory-overclocked Kepler cards have also been problematic due to the difficulty of testing for stability with boost and TDP cap operating. The only reason to get a non-stock card is if you're buying it for the cooler or improved power delivery components for high overclocking.

To make sure you're looking at the best prices, Newegg has the EVGA Geforce GTX 670 for $379.99-$20 MIR=$359.99 with free shipping.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Set a framerate cap at 60fps; that doesn't fix the problem, but it will significantly reduce the amount of tearing you see. From some Googling, there's no way this will work until MST hubs are available and you can connect all three monitors via DisplayPort.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
What videocards are you using?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
It would also come in handy for supersampling or if you want to run custom high-res texture packs (such as for Skyrim), but that's rather uncommon right now.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Jan posted:

I'm pretty certain VRAM uses a completely separate addressing space. Direct3D resources work in terms of "buffers" instead of pointers, and said buffers wrap up all the memory mapping information for GPU resources.
I'm pretty sure VRAM and system RAM do use the same address space; that's why 32-bit systems can only address 4GB minus VRAM minus all other hardware reservations worth of system RAM. This isn't relevant for the case of a 32-bit app running on a 64-bit system, because Skyrim doesn't care about the VRAM, only the video driver does, and that's a 64-bit application.
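
As a toy example of that arithmetic (the reservation sizes here are made up for illustration, not measured from any real system):
code:
# Why a 32-bit OS "loses" RAM to the video card: RAM, the VRAM aperture, and
# other hardware reservations all share the same 4GB physical address space.
# The reservation sizes below are hypothetical, just to show the subtraction.
ADDRESS_SPACE_GB = 4.0
vram_aperture_gb = 1.0       # e.g. a 1GB card mapped in full
other_reservations_gb = 0.5  # PCI devices, firmware, etc.

usable_ram_gb = ADDRESS_SPACE_GB - vram_aperture_gb - other_reservations_gb
print(f"~{usable_ram_gb:.1f} GB of a 4 GB kit actually usable")  # ~2.5 GB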

Animal posted:

I have it set exactly like them. High Quality, AAA, 16AF. I turned off DoF and that seems to put the results in parity, higher scores than them which is about right considering they are 1600p and I am 1440p.
Do make sure you have it running in DX11 mode; accidentally falling back to an older DX mode will significantly impair performance. Another Metro 2033 benchmarking pro-tip: there's significant run-to-run variability, so you need to do multiple runs and average them. Also make sure you have your fan speed turned up high enough to get meaningful results; if the videocard breaks 69C it throttles, so you have to throw out the results for that run.
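
If you're scripting it, the run-handling logic is basically this (a hypothetical helper, not anything from the Metro benchmark itself):
code:
# Average several benchmark passes and throw out any pass where the card
# broke 69C (it throttles above that, so the numbers aren't comparable).
def average_fps(runs, temp_limit_c=69):
    """runs: list of (avg_fps, peak_temp_c) tuples from individual passes."""
    valid = [fps for fps, temp in runs if temp <= temp_limit_c]
    if not valid:
        raise ValueError("every pass throttled; raise the fan speed and re-test")
    return sum(valid) / len(valid)

print(average_fps([(52.3, 66), (53.1, 68), (49.8, 71)]))  # third pass discarded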

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Factory Factory posted:

I was hesitant to post because I'm not sure either, but I think that the graphics card maps only a portion of its RAM into the system's address space. Recall that older AGP cards had configurable Apertures to define how much data could flow through (which was subject to a lot of enthusiast nonsense, since it turned out that aperture sizes didn't really affect performance). Googling around suggests that's still the case. For example, this TechNet blog describing the addressing limit verifies that, on a 4GB RAM 32-bit system with 2.2 GB accessible, his two 1GB GeForce 280s were using 256 MB of address space each. Much of the rest was apparently either reserved for dynamic mapping or over-conservative reservation.
I could be wrong, but I think that Technet blog contains a typo. He says 2.2GB remaining, but later in the text he refers to the "over 2GB hole", which indicates to me he meant 1.8GB of 4.0GB remaining. This matches up with 4GB minus 2x1GB minus some more hardware reservations. Unless I'm misreading or misunderstanding, his screenshots also show a ~2.2GB hole. I'm not going to pretend I was able to understand all of this, but is there maybe a difference between the "right" way to do things and the way it gets done in practice? Or is that what you're saying? I've never seen a system with a 32-bit OS and a discrete videocard have more available RAM than what would be expected from 4GB minus VRAM minus other hardware reservations, and it seems like if it were possible to do without a hell of a lot of development, they would have done it for the competitive advantage.
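
To spell out the arithmetic behind my typo suspicion (the extra reservation figure is a guess just to make the numbers land):
code:
# 4GB of address space minus two fully-mapped 1GB cards minus some additional
# hardware reservations lands near 1.8GB remaining, i.e. a ~2.2GB hole.
total_gb = 4.0
vram_gb = 2 * 1.0             # two 1GB GeForce 280s
other_reservations_gb = 0.2   # rough guess for the remaining reservations
remaining_gb = total_gb - vram_gb - other_reservations_gb
print(f"~{remaining_gb:.1f} GB remaining, ~{total_gb - remaining_gb:.1f} GB hole")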

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Factory Factory posted:

Oh boy oh boy, I'm not even reading the intro paragraph before posting:

AnandTech: Intel HD 4000 from DDR3-1333 to DDR3-2400.
The big surprise to me is how little difference there is; I guess HD 4000 still doesn't have the throughput for memory bandwidth to matter. It's also very interesting how significant the real-world performance improvements are for things like copying files over USB when moving from DDR3-1333 to 1600. It really supports the conventional wisdom that EVERYONE should be using DDR3-1600, with enthusiasts potentially benefiting from DDR3-1866 (if they're not taking the money from something else).
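
For reference, the peak theoretical numbers for those speeds (dual-channel, 64-bit channels; the point is how little of this shows up in the HD 4000 results):
code:
# Peak theoretical bandwidth for dual-channel DDR3 at the speeds in the article.
def ddr3_bandwidth_gbs(mt_per_s, channels=2):
    return mt_per_s * 8 * channels / 1000  # 8 bytes per transfer per channel

for speed in (1333, 1600, 1866, 2400):
    print(f"DDR3-{speed}: {ddr3_bandwidth_gbs(speed):.1f} GB/s")
# DDR3-1333: 21.3, DDR3-1600: 25.6, DDR3-1866: 29.9, DDR3-2400: 38.4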

More directly on-topic for this thread: EVGA has released their PrecisionX 3.0.4 software, the interesting new feature is what they're calling K-Boost:

EVGA posted:

This feature will allow you to “lock” GTX 600 series cards to Boost Clock/Voltage, even in 2D mode. Some important notes about this new feature:
-If using SLI, please disable SLI prior to enabling this feature, you can re-enable SLI once system is rebooted.
-Please disable EVGA K-Boost before reinstalling, or uninstalling the NVIDIA driver.
I haven't done any testing yet but it seems like this could come in pretty handy for overclock testing.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

TheRevolution1 posted:

How would I even tell if it was the motherboard or PSU? The motherboard has been the same gigabyte z68 the whole time. The PSU is a 650w XFX.
Post your own thread in the Haus of Tech Support, use the template in the sticky Rules thread and include the exact model of your motherboard and power supply. Gigabyte motherboards are notorious for their poor power delivery quality, though that usually manifests as hangs, restarts, or power-offs.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Factory Factory posted:

Quadros are professional cards, outfitted with ECC VRAM (and more of it than GeForces, to support GPGPU calculations).
Pedantic note: My understanding is that ECC is not supported on current-gen Quadros because hardware support was removed in the Kepler GPU. Additionally, I don't think the older Quadros used ECC RAM, per se. Rather, they did ECC calculations on the GPU, meaning available memory and memory bandwidth were reduced to make room for the ECC data. So, for example, if your card had 3GB of RAM and you enabled ECC, you'd then have 2457MB remaining. Another complicating factor is that GDDR5 also supports link ECC, which is implemented on nearly all cards (I seem to recall the first Geforces to support it didn't make use of it, but everything else does). This is why, when overclocking graphics memory, you'll see performance start to go back down right before you encounter errors if you overclock too high.
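
To put a number on that overhead (the ~20% is just backed out of the 3GB-to-2457MB example; I don't know the exact fraction nVidia reserves):
code:
# On-GPU ECC carves its check data out of the card's own memory, so enabling
# it shrinks usable VRAM. Fraction backed out of the 3GB -> 2457MB example.
total_mb = 3 * 1024
usable_with_ecc_mb = 2457
overhead = 1 - usable_with_ecc_mb / total_mb
print(f"ECC overhead ~{overhead:.0%}, {total_mb - usable_with_ecc_mb} MB reserved")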

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Furmark doesn't work well for stress-testing GTX 600-series cards because they spend all of their time at the TDP cap, well below max clock speeds. So far I've had the best luck with looping the Metro 2033 benchmark, but there may be better options. Try to keep the card between 65C and 69C for maximum boost clocks. The first thing you should do is max out the TDP slider and then go from there.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Linux Nazi posted:

So I'm currently running 2x 570s in SLI pushing a 2560x1600 display. Is there a single card that I can replace them with and get same or better performance?
No, only dual-GPU cards like the Geforce GTX 690 and Radeon HD 7990.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech has a quick WiiU teardown and hardware analysis. There's a POWER7 CPU (I guess that puts paid to the idea that POWER7 isn't power-efficient enough for consoles), a Radeon HD 4850 GPU, some eDRAM, and 2GB of single-channel DDR3-1600. Right now these are all discrete 40nm-class dies, though I'd expect to see them share a die when 28nm fab capacity is available. It's interesting how precisely it matches expectations, making me think that the Xbox 720 will also be the expected POWER7 + AMD GPU combination. It's also interesting how little memory bandwidth the WiiU has; the eDRAM will make up for that somewhat, but I think this reflects a design choice not to support pushing around a lot of texture data.
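
The bandwidth math, for scale (the GDDR5 comparison is mine, not from the article):
code:
# Single-channel (64-bit) DDR3-1600 main memory, before counting the eDRAM:
wiiu_main_memory_gbs = 1600 * 8 / 1000   # 12.8 GB/s
# For scale: a midrange 256-bit GDDR5 desktop card at 4.8 GT/s manages
# 4800 * 32 / 1000 = 153.6 GB/s, over ten times as much.
print(f"{wiiu_main_memory_gbs} GB/s")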

I think we'll see the gaming capabilities of the WiiU surpassed in very short order by tablets and smartphones. Current-generation tablets have as much memory bandwidth (not counting the eDRAM), and the iPad 4 has an astounding number of GPU shader cores for a mobile device (and a much higher resolution display than the WiiU will ever drive). While the raw performance still lags significantly behind the 360 and PS3, mobile devices will only get more flexible and efficient. And they do it at 1% of the power.

It's also funny to point out just how far behind smartphones the WiiU is in browsing performance. While that's largely due to the older WebKit code, it shows that efficient use of limited horsepower will always beat throwing hardware inefficiently at a problem.

Alereon fucked around with this message at 01:41 on Nov 20, 2012

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

PC LOAD LETTER posted:

Searching for "POWER7" turns up nothing in that article though? He does mention the CPU is PowerPC based (so is the Wii's CPU, based on the 750CL apparently) but the two aren't the same thing even though they're associated. There is also some new info that has come to light. If what some of the people in the B3D thread are saying is true, then the WiiU's CPU is just an updated version of the Wii's CPU, and not a very good one at that. It's apparently quite a bit slower than the X360's CPU, so the WiiU probably won't be able to run some ports from current consoles.
My bad, I didn't notice that IBM retracted their confirmation that it was POWER7 (they had previously confirmed it used the same POWER7 architecture as the CPUs in the Power 750 Express server). It'll be interesting to see what it actually is; while merely doubling the Wii CPU's clockspeed would be a uniquely Nintendo solution, that seems monumentally idiotic even for them. I thought even Nintendo was on board with the idea that having such weak hardware limited the utility of the console and thus revenue over the life of the product.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Guni posted:

driver stuff
Try the Catalyst 12.11 Beta 11 drivers, they specifically call out improvements in Sleeping Dogs.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Guni posted:

Thanks! Updating them now. What is 'CAP2'? I notice that to make Far Cry 3 better you must install it, and I wonder (though I don't play it, at least not yet) whether I should also install it?
That's the Catalyst Application Profiles package; yes, install that as well.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

McCoy Pauley posted:

Anyone have experience with buying a video card from Amazon warehouse? They currently have some EVGA cards I've been looking at on Newegg for much better prices, and generally other stuff I've gotten from the Warehouse has been as good as new. I'd assume that the EVGA warranty would apply exactly the same as if I bought a card off Newegg or regular Amazon. Right?
No, you're buying a used product just like from anywhere else. In the case of EVGA, you get the basic RMA-only warranty, not the one that applies to products purchased new.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Local Resident posted:

Why? Hasn't stuff like "FPS v latency" for games and "48fps v 24fps" for film been known/researched for years?
I don't think any of this is new or unknown, it just hasn't been something encountered by most people because the technology wasn't available at a low cost. Motion blur in games is a challenge because any real motion blur adds at least one frame of latency, which generally isn't acceptable. You can fake it well for motion of the camera, like how Valve does it in Source engine games, but this doesn't work for moving objects.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I would guess (a genuine guess, could be totally wrong) that the determiner of which systems are affected is going to be the motherboard, not the card manufacturer.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

MeramJert posted:

Does anyone know if the Intel HD Graphics 4000 supports multichannel LPCM output over HDMI? If I want this feature, will I need to buy a different graphics card?
Anandtech says this has been supported across all Intel graphics products since 2006, which is cool.

Edit: You are talking about streaming, e.g. movies, right? I don't know if you can play a game in 7.1, for example.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Scalding Coffee posted:

I was looking at something for a friend and this got my attention. Can someone tell what this is and why I see window blinds? Is this the future of card design?
This image is a bit clearer. That's the heatsink that sits under the plastic shroud you normally see; it's what all cards look like underneath. You can see the copper heat pipes running into the radiator at the top; the other ends meet up at the GPU to conduct the heat away. A blower fan sits behind the heatsink at the rear of the card and blows through it, exhausting heat out the back of the case.

Edit: Beaten!

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

M_S_C posted:

Has there been much talk about the Radeon 7870 LE? It's lovely that AMD would release this thing right after holiday season when everyone's already done buying their crap (my 7870 Ghz weeps gently in the corner). It's priced at the same point as the vanilla 7870 GHz edition but performs quite a bit better. AMD really should have just called this thing the 7890 or the 7930.
This is what should have been called the Radeon HD 7930, with a second shader cluster and 1/3 of the memory channels disabled. This is probably intended to help them clear inventory; they just delayed the Radeon HD 8000-series from late March to Q2 to clear out 7000-series inventory. Looks like a decent card if the pricing is right.
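
Roughly how the cut-down configuration works out, assuming this is a cut-down Tahiti like it looks, with the usual 64 shaders per GCN compute unit and 64-bit memory channels (the CU counts are my reading of the leaks, not confirmed figures):
code:
# Shader count and bus width for a cut-down Tahiti, assuming 64 shaders per
# GCN compute unit and 64-bit memory channels. CU counts are my reading of
# the leaked specs, not confirmed figures.
def tahiti_config(active_cus, active_channels):
    return active_cus * 64, active_channels * 64  # (shaders, bus width in bits)

print(tahiti_config(32, 6))  # full Tahiti:  (2048, 384)
print(tahiti_config(28, 6))  # HD 7950:      (1792, 384)
print(tahiti_config(24, 4))  # HD 7870 LE:   (1536, 256) -- 1/3 of channels off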

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
There's basically no way the 7870 LE is a custom part; they're just trying to find a way to sell GPUs that had one too many defects to make it as a 7950. The PS4's GPU is going to be a very customized part because they eventually want to integrate it with the CPU, if it isn't integrated at launch.

Alereon fucked around with this message at 00:00 on Jan 6, 2013

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Vsync is required for Alternate Frame Rendering mode, if I remember correctly (that may have been changed). That's the most efficient multi-GPU rendering mode (though it maximizes micro-stutter), I always had Vsync forced-on globally when I ran Crossfire. I found it illuminating to have GPU-Z running to show me how loaded each GPU was, you can often find tweaks to improve Crossfire scaling a bit.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
News is spreading that VGLeaks has posted what they claim to be final Xbox 720 specs, featuring a CPU with eight 1.6GHz Jaguar cores (the descendant of the Bobcat cores used in the E-series low-power APUs) and Radeon HD 8770 graphics. I'm rather skeptical of this because it seems like giving up on per-thread CPU performance and relying totally on many slow cores is a proven-wrong approach, but we shall see. Similar rumors are spreading about the Playstation 4, including that it is a fully-integrated APU based on the Radeon HD 7870. A 7870 wins a matchup against an 8770, but by how much will depend on clockspeeds, power, and efficiency. The 7870 has 67% more shaders and up to twice the memory bandwidth, but we'll have to see what the actual deployed configuration is.
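
Where those percentages come from (the 8770 figures are the rumored ones, so treat this as illustrative):
code:
# Shader and bandwidth comparison between a 7870 and the rumored 8770 config.
hd7870_shaders, hd8770_shaders = 1280, 768
print(hd7870_shaders / hd8770_shaders)  # ~1.67, i.e. 67% more shaders

def bandwidth_gbs(bus_bits, gt_per_s):
    return bus_bits // 8 * gt_per_s

hd7870_bw = bandwidth_gbs(256, 4.8)  # 153.6 GB/s
hd8770_bw = bandwidth_gbs(128, 4.8)  # 76.8 GB/s if it keeps a 128-bit bus
print(hd7870_bw / hd8770_bw)         # 2.0 -- hence "up to twice"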

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Digital Foundry also says their "trusted sources" confirm the PS4 and Xbox 720 are Jaguar-based, but I have no idea how legit they are. If it's true, they better have implemented amazing Turbo or I can't see this going well.

HalloKitty posted:

I'd be skeptical too. 8 weak cores? This sounds like the exact opposite of what you'd want in a games console. How will backwards compatibility be handled? .. and so on.
Have there ever been any statements confirming backwards compatibility? My assumption would be that they would port the most recent/popular games to the new platforms and then just give you a free copy if you owned it before, not that they would try to make the new consoles play games from the last generation, but I could be wrong.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

teh z0rg posted:

Where are the GTX780s? Where are they?
The Geforce Titan is coming in March for $899 and is based around the GK110 GPU that powers the Tesla K20-series. We don't really know anything about cards beyond that, as far as I know.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

uhhhhahhhhohahhh posted:

Wish they'd just start putting 1 or 2 120mm fans and then enclose the whole thing in a shroud like the reference coolers are so it exhausts out the back.
Unfortunately normal fans don't have the pressure to push decent airflow through a configuration like that, you're pretty much restricted to using a blower-type design or an open-air cooler. Really though it's not too hard to exhaust hot air out of a case so open-air is a better option.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Skilleddk posted:

Does anyone have an opinion of the Windforce 3x coolers, compared to "normal" ones on GTX 680? It's just 20$ more, thinking of getting that
Don't get the Windforce 5X ones though, using a bunch of tiny fans makes for a bad noise profile and reliability.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech has received confirmation from AMD that the initial Radeon HD 8000-series launch will not replace their high-end desktop parts, which is why those charts showed the 7800 and 7900-series as stable through 2013. They DO plan to release a new GPU architecture by the end of the year to replace these products on the desktop. In the near term, AMD will be releasing new Radeon HD 7000-series parts based on the Oland GPU, which is a reconfiguration of existing GCN designs.

Overall, no new high-end videocards before the end of the year, but some low-end parts, especially mobile and older rebrands, will get slightly freshened.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
If the application supports CUDA on that card (it may only support GTX 400+ cards, for example; newer cards support newer versions of the CUDA API), then yes, put it in and install the latest drivers from the nVidia site.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
In general, QuickSync is the fastest but not quite as good quality as most software encoders, x264 with OpenCL enabled is the best quality and second in performance, and everything else falls somewhere in between. I have no idea if you can use external codecs with that Sony software, though.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Yaos posted:

Nvidia's current promotion sucks: instead of 3 free games you get $50 of in-game currency each for Hawken, Planetside 2, and World of Tanks. A Far Cry from the previous promotion.
On the plus side, I'd be surprised if you can't turn that into at least $50-75 cash (I don't know if the Planetside 2 currency is actually worth anything, but people definitely buy Hawken and WoT gold).

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Do keep in mind that Furmark won't stress the card very much (it draws so much power that the card will spend all of its time throttled at the TDP cap); I found much more success stress testing by running loops of Metro 2033.
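
If you want to automate the looping, something like this works (the benchmark path is a placeholder; point it at wherever your Metro 2033 benchmark executable actually lives):
code:
# Run the standalone benchmark back-to-back a set number of times.
# BENCH_CMD is a placeholder path, not the real install location.
import subprocess, time

BENCH_CMD = [r"C:\Games\Metro2033\Benchmark.exe"]

for i in range(20):
    subprocess.run(BENCH_CMD, check=True)  # blocks until the pass finishes
    print(f"pass {i + 1} complete")
    time.sleep(5)                          # brief pause between passes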
