|
I was doing some back-of-the-envelope math to figure out the core configuration of the supposed 8870 using this table on Wikipedia, and I'm getting roughly double a 7850: 2048 shaders, 128 texture units, but still 32 ROPs. A 256-bit memory bus using 6GHz GDDR5, just like the GTX 680. The FLOP numbers that page gives seem about 9% low by my math, but I guess that's not that bad. The transistor count is down 1B from Tahiti, though I suppose they could have shaved that much off through the reductions in memory controller count, caches, and simpler shader clusters with less DP throughput. Cramming that into 160W at 1050MHz is a neat trick though, unless that's an "average" TDP and the max is actually higher. Edit: Of course this could always be a complete fake. Alereon fucked around with this message at 00:42 on Sep 17, 2012 |
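For anyone who wants to redo the napkin math, this is the kind of arithmetic I mean (just a sketch: the 2 FLOPs/clock figure assumes one fused multiply-add per shader per cycle, which is how peak single-precision numbers are usually quoted, and the rumored specs are obviously unconfirmed):
code:
# Back-of-the-envelope peak throughput for the rumored 8870-style configuration.
shaders = 2048
core_clock_ghz = 1.05        # rumored 1050MHz
flops_per_clock = 2          # one fused multiply-add per shader per cycle
peak_gflops = shaders * flops_per_clock * core_clock_ghz
print(f"Peak single-precision: {peak_gflops:.0f} GFLOPS")  # ~4301 GFLOPS

# Memory bandwidth for a 256-bit bus with 6GHz-effective GDDR5 (GTX 680-style)
bus_width_bytes = 256 // 8
effective_transfer_ghz = 6.0
print(f"Memory bandwidth: {bus_width_bytes * effective_transfer_ghz:.1f} GB/s")  # 192 GB/s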
# ¿ Sep 17, 2012 00:00 |
|
Space Racist posted:I remember 5 or 6 years ago there used to be actual dedicated PhysX cards. Not that anyone actually bought one, but are those still useful at all, or are they entirely obsolete by now compared to previous-gen spare Geforces?
|
# ¿ Sep 20, 2012 03:35 |
|
1050 isn't divisible by any power of two other than 2, which is probably related to the issue. I'd guess that for optimization reasons they used shaders that expect the resolution to be divisible by 16, and then failed to render a bit outside of the viewport to prevent graphical anomalies. You can run into similar issues with video, which MUST have a resolution that is divisible by 16 (because macroblocks are 16x16 pixels, ignoring obsolete formats). This is why 1080p video is actually encoded 1088 pixels tall; you can sometimes see a colored band at the bottom of the picture if your video decoder/player doesn't know to crop that off.
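If you want to see that padding rule in action, here's a toy sketch (assuming the usual 16x16 macroblock size):
code:
# Coded video dimensions are rounded up to the next multiple of 16 because the
# picture is encoded as 16x16 macroblocks; the decoder is supposed to crop the
# padding back off before display.
def coded_size(display_px, macroblock=16):
    return -(-display_px // macroblock) * macroblock  # ceiling division

for height in (720, 1080):
    padded = coded_size(height)
    print(f"{height}p is stored {padded} pixels tall, crop {padded - height}")
# 720p is stored 720 pixels tall, crop 0
# 1080p is stored 1088 pixels tall, crop 8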
|
# ¿ Sep 23, 2012 20:11 |
|
A note for anyone using Firefox with AMD videocards: the Catalyst 12.8 drivers have been blocklisted due to stability issues. Use the Catalyst 12.9 beta drivers instead.
|
# ¿ Oct 4, 2012 19:25 |
|
The GTX 650 Ti really should be positioned against the HD 7770, if nVidia isn't pricing the card appropriately that is a bit ridiculous.
|
# ¿ Oct 8, 2012 13:50 |
|
movax posted:That's pretty slick, not to mention clever and relatively cheap. You get the netlist for the ARM core from ARM, free to implement it on your chosen process, and make whatever changes you deem fit. Since it's ARM, I could see them leveraging AXI to create an interconnect between the ARM cores and their logic, or their own high-performance bus.
|
# ¿ Oct 9, 2012 01:02 |
|
Anandtech's Geforce GTX 650 Ti review is out. Leaks were correct: if you don't buy the Radeon HD 7850 (preferably 2GB) instead, you are an idiot. If nVidia cuts the price by $20, though, it becomes a reasonable step up from the Radeon HD 7770, especially when overclocked. Interestingly enough, all tested cards overclocked to precisely identical settings.
|
# ¿ Oct 10, 2012 04:05 |
|
Animal posted:I am looking at Geforce 670's on Amazon. There is a $30 difference between the regular EVGA 670 and the FTW model, and a bigger jump to Superclocked. Does the factory overclock make a big difference on these cards? To make sure you're looking at the best prices, Newegg has the EVGA Geforce GTX 670 for $379.99-$20 MIR=$359.99 with free shipping.
|
# ¿ Oct 14, 2012 03:56 |
|
Set a framerate cap at 60fps; that doesn't fix the problem, but it will significantly reduce the amount of tearing you see. From some Googling, there's no way this will work until MST hubs are available and you can connect all three monitors via DisplayPort.
|
# ¿ Oct 14, 2012 18:12 |
|
What videocards are you using?
|
# ¿ Oct 14, 2012 18:23 |
|
It would also come in handy for supersampling or if you want to run custom high-res texture packs (such as for Skyrim), but that's rather uncommon right now.
|
# ¿ Oct 16, 2012 18:52 |
|
Jan posted:I'm pretty certain VRAM uses a completely separate addressing space. Direct3D resources work in terms of "buffers" instead of pointers, and said buffers wrap up all the memory mapping information for GPU resources.
Animal posted:I have it set exactly like them. High Quality, AAA, 16AF. I turned off DoF and that seems to put the results in parity, higher scores than them which is about right considering they are 1600p and I am 1440p.
|
# ¿ Oct 17, 2012 19:58 |
|
Factory Factory posted:I was hesitant to post because I'm not sure either, but I think that the graphics card maps only a portion of its RAM into the system's address space. Recall that older AGP cards had configurable Apertures to define how much data could flow through (which was subject to a lot of enthusiast nonsense, since it turned out that aperture sizes didn't really affect performance). Googling around suggests that's still the case. For example, this TechNet blog describing the addressing limit verifies that, on a 4GB RAM 32-bit system with 2.2 GB accessible, his two 1GB GeForce 280s use 256 MB of address space each. Much of the rest was apparently either reserved for dynamic mapping or over-conservative reservation. movax posted:stuff
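To put rough numbers on it, a quick sketch using the figures from that blog post (the exact reservations vary from board to board, so treat this as illustrative arithmetic only):
code:
# 32-bit address-space bookkeeping for the system described in that TechNet post.
# Device MMIO windows below 4GB are address space a 32-bit OS can't use for RAM.
total_address_space_mb = 4096           # 32-bit limit
usable_ram_mb = 2200                    # what that system could actually address
gpu_apertures_mb = 2 * 256              # two 1GB GeForce 280s, 256MB window each
implied_other_mb = total_address_space_mb - usable_ram_mb - gpu_apertures_mb
print(f"GPU MMIO windows: {gpu_apertures_mb} MB")
print(f"Implied other reservations (chipset, PCI, firmware, etc.): {implied_other_mb} MB")  # 1384 MB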
|
# ¿ Oct 18, 2012 02:06 |
|
Factory Factory posted:Oh boy oh boy, I'm not even reading the intro paragraph before posting:
More directly on-topic for this thread: EVGA has released their PrecisionX 3.0.4 software; the interesting new feature is what they're calling K-Boost. EVGA posted:This feature will allow you to “lock” GTX 600 series cards to Boost Clock/Voltage, even in 2D mode. Some important notes about this new feature:
|
# ¿ Oct 20, 2012 18:01 |
|
TheRevolution1 posted:How would I even tell if it was the motherboard or PSU? The motherboard has been the same gigabyte z68 the whole time. The PSU is a 650w XFX.
|
# ¿ Oct 21, 2012 17:56 |
|
Factory Factory posted:Quadros are professional cards, outfitted with ECC VRAM (and more of it than GeForces, to support GPGPU calculations).
|
# ¿ Oct 24, 2012 19:27 |
|
Furmark doesn't work well for stress-testing GTX 600-series cards because they spend all of their time at the TDP cap, well below max clock speeds. So far I've had the best luck with looping the Metro 2033 benchmark, but there may be better options. Try to keep the card between 65C and 69C for maximum boost clocks. The first thing you should do is max out the TDP slider and then go from there.
|
# ¿ Oct 30, 2012 08:15 |
|
Linux Nazi posted:So I'm currently running 2x 570s in SLI pushing a 2560x1600 display. Is there a single card that I can replace them with and get same or better performance?
|
# ¿ Nov 1, 2012 23:43 |
|
Anandtech has a quick WiiU teardown and hardware analysis. There's a POWER7 CPU (I guess that puts paid to the idea that POWER7 isn't power-efficient enough for consoles), Radeon HD 4850 GPU, some eDRAM, and 2GB of single-channel DDR3-1600. Right now these are all discrete 40nm-class dies, though I'd expect to see them share a die when 28nm fab capacity is available. It's interesting how precisely it matches expectations, making me think that the Xbox 720 will also be the expected POWER7+AMD GPU. It's also interesting how little memory bandwidth the WiiU has; the eDRAM will make up for that somewhat, but I think this reflects a design choice not to support pushing around a lot of texture data. I think we'll see the gaming capabilities of the WiiU surpassed in very short order by tablets and smartphones. Current generation tablets have as much memory bandwidth (not counting the eDRAM), and the iPad 4 has an astounding number of GPU shader cores for a mobile device (and a much higher resolution display than the WiiU will ever drive). While the raw performance still lags significantly behind the 360 and PS3, mobile devices will only get more flexible and efficient. And they do it in 1% of the power. It's also funny to point out just how far behind smartphones the WiiU is in browsing performance. While that's largely due to the older WebKit code, it shows that efficient use of limited horsepower will always beat throwing hardware inefficiently at a problem. Alereon fucked around with this message at 01:41 on Nov 20, 2012 |
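For a sense of scale on the bandwidth point, a rough sketch (the main-memory figure assumes a single 64-bit DDR3-1600 channel as reported; the comparison numbers are ballpark peak figures):
code:
# Peak main-memory bandwidth for single-channel (64-bit) DDR3-1600, ignoring the eDRAM.
transfer_rate_mt_s = 1600               # DDR3-1600 = 1600 MT/s
bus_width_bytes = 64 // 8
wiiu_gb_s = transfer_rate_mt_s * bus_width_bytes / 1000
print(f"WiiU main memory: {wiiu_gb_s:.1f} GB/s")   # 12.8 GB/s

# Ballpark comparisons: the Xbox 360's GDDR3 pool is ~22.4 GB/s, and a midrange
# desktop card like the Radeon HD 7850 is ~153.6 GB/s.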
# ¿ Nov 20, 2012 01:22 |
|
PC LOAD LETTER posted:Searching for "POWER7" turns up nothing in that article though? He does mention the CPU is PowerPC based (so is the Wii's CPU, based on the 750CL apparently) but the 2 aren't the same thing even though they're associated. There is also some new info that has come to light. If what some of the people in the B3D thread are saying is true then the WiiU's CPU is just an updated version of the Wii's CPU, and not a very good one at that. It's apparently quite a bit slower than the X360's CPU, so the WiiU probably won't be able to run some ports from current consoles.
|
# ¿ Nov 27, 2012 01:09 |
|
Guni posted:driver stuff
|
# ¿ Dec 9, 2012 14:47 |
|
Guni posted:Thanks! Updating them now, what is 'CAP2'? I notice that to make Far Cry 3 better you must install it and I wonder (though I don't play it, at least yet) should I also install it?
|
# ¿ Dec 10, 2012 03:02 |
|
McCoy Pauley posted:Anyone have experience with buying a video card from Amazon warehouse? They currently have some EVGA cards I've been looking at on Newegg for much better prices, and generally other stuff I've gotten from the Warehouse has been as good as new. I'd assume that the EVGA warranty would apply exactly the same as if I bought a card off Newegg or regular Amazon. Right?
|
# ¿ Dec 12, 2012 03:46 |
|
Local Resident posted:Why? Hasn't stuff like "FPS v latency" for games and "48fps v 24fps" for film been known/researched for years?
|
# ¿ Dec 16, 2012 18:10 |
|
I would guess (genuine guess, could be totally wrong) the determiner of what systems are affected is going to be the motherboard, not card manufacturer.
|
# ¿ Dec 17, 2012 21:12 |
|
MeramJert posted:Does anyone know if the Intel HD Graphics 4000 supports multichannel LPCM output over HDMI? If I want this feature, will I need to buy a different graphics card? Edit: You are talking about streaming, e.g. movies right? I don't know if you can play a game in 7.1 for example.
|
# ¿ Dec 19, 2012 04:59 |
|
Scalding Coffee posted:I was looking at something for a friend and this got my attention. Can someone tell what this is and why I see window blinds? Is this the future of card design? Edit: Beaten!
|
# ¿ Dec 24, 2012 23:27 |
|
M_S_C posted:Has there been much talk about the Radeon 7870 LE? It's lovely that AMD would release this thing right after holiday season when everyone's already done buying their crap (my 7870 Ghz weeps gently in the corner). It's priced at the same point as the vanilla 7870 GHz edition but performs quite a bit better. AMD really should have just called this thing the 7890 or the 7930.
|
# ¿ Jan 5, 2013 18:13 |
|
There's basically no way the 7870 LE is a custom part; they're just trying to find a way to sell GPUs that had one too many defects to make it as a 7950. The PS4's GPU is going to be a very customized part because they eventually want to integrate it with the CPU, if it isn't integrated at launch.
Alereon fucked around with this message at 00:00 on Jan 6, 2013 |
# ¿ Jan 5, 2013 23:20 |
|
Vsync is required for Alternate Frame Rendering mode, if I remember correctly (that may have been changed). That's the most efficient multi-GPU rendering mode (though it maximizes micro-stutter), so I always had Vsync forced on globally when I ran Crossfire. I found it illuminating to have GPU-Z running to show me how loaded each GPU was; you can often find tweaks to improve Crossfire scaling a bit.
|
# ¿ Jan 19, 2013 21:47 |
|
News is spreading that VGLeaks has posted what they claim to be final Xbox 720 specs, featuring a CPU with eight 1.6GHz Jaguar cores (the descendant of the Bobcat cores used in the E-series low-power APUs) and Radeon HD 8770 graphics. I'm rather skeptical of this because giving up on per-thread CPU performance and relying totally on many slow cores seems like a proven-wrong approach, but we shall see. Similar rumors are spreading about the Playstation 4, including that it is a fully-integrated APU based on the Radeon HD 7870. A 7870 wins a matchup against an 8770, but by how much will depend on clockspeeds, power, and efficiency. The 7870 has 67% more shaders and up to twice the memory bandwidth, but we'll have to see what the actual deployed configuration is.
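To spell out where those percentages come from, a quick sketch (the 7870 figures are the real card's; the 8770 numbers are just what the rumor implies, so treat them as assumptions rather than confirmed specs):
code:
# Known Pitcairn (Radeon HD 7870) specs vs. an assumed 8770-class part.
hd7870_shaders = 1280
hd7870_bandwidth_gb_s = 256 / 8 * 4.8        # 256-bit bus, 4.8Gbps GDDR5 = 153.6 GB/s

# The "67% more shaders" figure implies something like a 768-shader part on a
# 128-bit bus; these values are assumptions based on the rumor, not known specs.
hd8770_shaders = 768
hd8770_bandwidth_gb_s = 128 / 8 * 4.8        # 76.8 GB/s at the same memory speed

print(f"Shader advantage: {hd7870_shaders / hd8770_shaders - 1:.0%}")                # 67%
print(f"Bandwidth advantage: {hd7870_bandwidth_gb_s / hd8770_bandwidth_gb_s:.1f}x")  # 2.0x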
|
# ¿ Jan 22, 2013 00:35 |
|
Digital Foundry also says their "trusted sources" confirm the PS4 and Xbox 720 are Jaguar-based, but I have no idea how legit they are. If it's true, they better have implemented amazing Turbo or I can't see this going well.
HalloKitty posted:I'd be skeptical too. 8 weak cores? This sounds like the exact opposite of what you'd want in a games console. How will backwards compatibility be handled? .. and so on.
|
# ¿ Jan 22, 2013 00:59 |
|
teh z0rg posted:Where are the GTX780s? Where are they?
|
# ¿ Jan 27, 2013 15:14 |
|
uhhhhahhhhohahhh posted:Wish they'd just start putting 1 or 2 120mm fans and then enclose the whole thing in a shroud like the reference coolers are so it exhausts out the back.
|
# ¿ Feb 3, 2013 21:29 |
|
Skilleddk posted:Does anyone have an opinion of the Windforce 3x coolers, compared to "normal" ones on GTX 680? It's just 20$ more, thinking of getting that
|
# ¿ Feb 12, 2013 19:56 |
|
Anandtech has received confirmation from AMD that the initial Radeon HD 8000-series launch will not replace their high-end desktop parts, which is why those charts showed the 7800 and 7900-series as stable through 2013. They DO plan to release a new GPU architecture by the end of the year to replace these products on the desktop. In the near term, AMD will be releasing new Radeon HD 7000-series parts based on the Oland GPU, which is a reconfiguration of existing GCN designs. Overall, no new high-end videocards before the end of the year, but some low-end parts, especially mobile and older rebrands, will get slightly freshened.
|
# ¿ Feb 16, 2013 02:28 |
|
If the application supports CUDA for that card (it may only support GTX 400+ cards, for example; newer cards support newer versions of the CUDA API) then yes, put it in and install the latest drivers from the nVidia site.
|
# ¿ Feb 21, 2013 19:17 |
|
In general QuickSync is the fastest but not quite as good quality as most software encoders, x264 with OpenCL enabled is the best quality and second in performance, and everything else falls somewhere in between. I have no idea if you can use external codecs with that Sony software though.
|
# ¿ Feb 21, 2013 22:12 |
|
Yaos posted:Nvidia's current promotion sucks, instead of 3 free games you get $50 of in-game currency each for Hawken, Planetside 2, and World of Tanks. A Far Cry from the previous promotion.
|
# ¿ Feb 25, 2013 21:34 |
|
Do keep in mind that Furmark won't stress the card very much (it draws so much power that the card will spend all of its time throttled at the TDP cap); I found much more success stress testing by running loops of Metro 2033.
|
# ¿ Feb 26, 2013 20:08 |