|
Agreed posted:... Factory Factory ... Trade you for an E350 HTPC with a CableCard tuner.
|
# ¿ May 31, 2012 17:35 |
|
AMD is moving its Catalyst drivers off of a monthly schedule and onto Nvidia's "as needed/when it's ready" schedule. AnandTech posted:Why? As we briefly mentioned before, the benefits of AMD's monthly driver release schedule were realized in practice for quite some time, but only up until the last couple of years. Starting around the release of the HD 5000 series, though, those benefits became harder to realize and less meaningful. The issue is at its basic level one of complexity – AMD's drivers continue to grow in size and complexity, with the latest release weighing in at 170MB. Video drivers have effectively eclipsed Windows 95 in complexity and size in the last couple of years, which reflects the fact that these devices really have become processors, complete with all the intricacies of code compilation and scheduling, and basically in need of a mini-OS just to run them.
|
# ¿ Jun 1, 2012 16:30 |
|
Silly question: don't PCIe 2.1 and 3.0 slots deliver up to 150W of power on their own? So such a card installed in a PCIe 3.0 motherboard would still be able to run without the aux power hooked up?
|
# ¿ Jun 2, 2012 00:50 |
|
Afox has a 6850 that does so (Afox being a Foxconn spinoff and newcomer to the US market). That card is also single-slot.
|
# ¿ Jun 2, 2012 00:59 |
|
Two neat AnandTech articles today. First, it followed up on its Thunderbolt article, full of Asus engineering slides that we mere mortals can only begin to understand. PCIe hot plug is working, though devices without supported drivers might only half work or BSOD a system entirely. Link. Second, they looked into how TDP limits affect the performance of Ultrabooks as gaming platforms by comparing HD 4000 at 17W and 45W (the big dips in the charts are the scene transitions in the benchmark). As nice as the dream of ULV good-enough gaming graphics is (in IVB or Trinity), the TDP limit leads to a significant performance deficit compared to similar 35W chips. Neither the CPU nor the GPU can really wind up within a 17W budget to handle a game. And thanks to AnandTech finally discovering HWiNFO64, we've got a rough power figure for a full-performance HD 4000: 15W. I'd say we'll need another process shrink before ULV CPUs/IGPs/APUs can give a solid gaming experience.
|
# ¿ Jun 3, 2012 22:24 |
|
BLOWTAKKKS posted:How quiet is the ASUS 670? Would two of them be quieter than a single 690? I really wouldn't want to deal with aftermarket GPU coolers after my last experience. I usually go EVGA, but they all seem to have the same loud single-fan setup. Asus's DCII 670 is quieter under load than a 690 is at idle. Also, I can't help but ask why you're considering that much oomph.
|
# ¿ Jun 4, 2012 14:48 |
|
Lblitzer posted:This just reminds me of reddit's gamingpc reddit where you can't offer suggestions on saving money because it's all about spending as much as possible for e-cred. This is one of my favorite finds recently: http://www.reddit.com/r/buildapc/comments/uip63/build_ready_3000_gaming_pc_best_bang_for_my_buck/ At least they have enough brains to poo poo on OCZ gear, too. I feel bad for the guys trying to give good advice there, drowned out in a sea of "You really should get a 1300W PSU instead of that 850W thing." Though the build with the i5-3570K looked almost reasonable until you got to the dual GeForce 690s.
|
# ¿ Jun 4, 2012 15:00 |
|
Are you sure that's not TDP-based? Or have people tested cards with different coolers and found that it always happens at 70 C?
|
# ¿ Jun 4, 2012 17:21 |
|
So wait, you currently have SLI 570s, you're seeing other people getting better framerates on lesser video cards, and so your solution is to upgrade your video card? Look for bottlenecks elsewhere.
|
# ¿ Jun 4, 2012 20:30 |
|
Here's the skinny on GK110: 7.1 billion transistors, 15 compute-optimized SMXs, 2,880 CUDA cores, 288 GB/s of memory bandwidth. But it still doesn't look like it's optimized for real-time graphics...
|
# ¿ Jun 6, 2012 08:11 |
|
U.S. Government to AMD: Your drivers are lovely, be more like Intel. It's about high-value IT security. Intel's and Nvidia's drivers all support address space layout randomization (ASLR), which makes it harder to probe for security flaws. AMD's drivers do not, so ASLR cannot be fully enabled on systems running AMD hardware.
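For the curious, opting into ASLR is literally just one flag (DYNAMICBASE) in a DLL's PE header, which is why it's so galling that AMD doesn't set it. Here's a minimal C sketch of how you'd check any given user-mode driver DLL yourself; the path below is just an example, point it at whatever DLL you care about:

```c
/* Sketch: check whether a DLL opts into ASLR (the DYNAMICBASE flag).
 * Link against dbghelp.lib. The DLL path is a placeholder example. */
#include <windows.h>
#include <dbghelp.h>
#include <stdio.h>

int main(void)
{
    const char *path = "C:\\Windows\\System32\\atiumdag.dll"; /* example DLL */

    /* Map the image without executing any of its code. */
    HMODULE mod = LoadLibraryExA(path, NULL, DONT_RESOLVE_DLL_REFERENCES);
    if (!mod) {
        printf("Couldn't load %s\n", path);
        return 1;
    }

    PIMAGE_NT_HEADERS nt = ImageNtHeader((PVOID)mod);
    if (nt) {
        int aslr = (nt->OptionalHeader.DllCharacteristics &
                    IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE) != 0;
        printf("%s: ASLR (DYNAMICBASE) %s\n", path,
               aslr ? "enabled" : "DISABLED");
    }
    FreeLibrary(mod);
    return 0;
}
```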
|
# ¿ Jun 8, 2012 15:23 |
|
We need a 600 pixel wide for that. I love that indirect lighting.
|
# ¿ Jun 8, 2012 18:32 |
|
Factory Factory posted:U.S. Government to AMD: Your drivers are lovely, be more like Intel. Update to this: First, it wasn't the US Government, technically it was a security group at my alma mater. Second, AMD sulked a bit that it only affected users who screwed around with security settings, but will fix the bug.
|
# ¿ Jun 9, 2012 23:35 |
|
Speaking as one who went with 6850s in CF, a single card is always preferable. There are just too many headaches and caveats and such associated with CF and SLI setups. The raw power is impressive, but you pay for it in odd performance problems, more rampant instability, constant driver/application profile updates, and additional noise.
|
# ¿ Jun 12, 2012 01:08 |
|
I was gonna write this whole thing but then I Googled up Hardware Secrets having already done the work. Clicky the linky for an exciting delve into the meanings of such letter salad as AFR, SFR, AFR of SFR, SLI AA, Scissors, Supertiling, and Super AA. It's vintage 2008, though, when CF and SLI were The New Big Thing rather than boring commonplace e-peen inflators.
|
# ¿ Jun 12, 2012 22:13 |
|
Goon project: Update that Wiki page. I'll create a Wiki for it.
|
# ¿ Jun 12, 2012 22:21 |
|
wipeout posted:I had real problems with them. Non-existent. The closest thing is Lucid Virtu/MVP, which can have some similar benefits but is pretty much completely different in how it works.
|
# ¿ Jun 13, 2012 01:21 |
|
660M: about a GeForce GT 640. ~9% slower core clock but ~12% faster RAM clock, no idea how that plays out in benchmarks.
640M: half to two-thirds of a 660M when memory bandwidth limits are involved; I'd say around a GeForce 440/630.
K2000M: a slightly slower 650M. Draw your own parallels.
|
# ¿ Jun 13, 2012 02:21 |
|
Grim Up North posted:I'm looking for an entry-level NVIDIA GPU to develop double-precision CUDA software on. Now, this is not for production, so I don't really need much horsepower (and I have no use for the actual graphics part at all), but I don't want to get a card with an out-of-whack price/performance ratio either. However, I don't really know where to look for any kind of general double-precision CUDA benchmarks. CUDA price/performance ratios usually kick in when comparing a GeForce 560 Ti-448, 570, or 580 to a Fermi-based Quadro. Anything else and CUDA price/performance is just awful, since the CUDA performance is so low. At that point, basically anything works for "Will it run?" testing.
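One concrete thing to check before buying: double precision needs compute capability 1.3 or higher, and you can query that from the CUDA runtime in a few lines of host-side C. A minimal sketch (nothing card-specific assumed, just the standard runtime API):

```c
/* Sketch: list CUDA devices and flag double-precision support.
 * Compile with e.g.: gcc dpquery.c -I/usr/local/cuda/include -lcudart */
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void)
{
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        /* Compute capability 1.3+ is required for double precision. */
        int dp = (p.major > 1) || (p.major == 1 && p.minor >= 3);
        printf("Device %d: %s, compute capability %d.%d, %d SMs %s\n",
               i, p.name, p.major, p.minor, p.multiProcessorCount,
               dp ? "(double precision OK)" : "(no double precision)");
    }
    return 0;
}
```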
|
# ¿ Jun 13, 2012 18:33 |
|
Pretty much.
|
# ¿ Jun 13, 2012 19:06 |
|
GF110 is so much faster at CUDA than GF114 that the GeForce cards based on GF110 (e.g. GeForce 580) are competitive with GF116/GF106-based Quadros (e.g. Quadro 2000) in DP float, even with the throttling, and smash them at every other workload.
|
# ¿ Jun 14, 2012 01:31 |
|
Assuming your CPU isn't an overclocked i7-920 or Bulldozer chip, yes. The card needs 170W for itself.
|
# ¿ Jun 14, 2012 17:11 |
|
I believe that's Zotac, as in Zotac and Sapphire are owned by the same holding company. But maybe they have Galaxy, too?
|
# ¿ Jun 18, 2012 04:08 |
|
Crossposting this because it's relevant to some of this thread's interests:Factory Factory posted:
|
# ¿ Jun 19, 2012 16:21 |
|
The articles go into detail, but basically: to steal some of Nvidia's thunder by offering a highly parallel HPC processor cluster on which you can re-use familiar x86 code and standard Intel systems-architecture optimizations.

Many desktop computing tasks take good advantage of having a few large processing cores available which are complex and flexible. The chip maker adds complexity by sticking more and more transistors on silicon, and more and more complex tasks can be solved in a single clock cycle. But there are plenty of computing workloads that don't need that much per-core complexity. A lot of statistical modeling, scientific simulation, heavy-duty content creation (like 3D rendering or bulk video transcoding), and graphics rendering tasks don't have a huge range of extremely complex calculations. Rather, they have tons and tons of the same calculations that have to be run over and over on a ton of data. In such workloads, a ton of simple execution cores is much more effective at getting the work done than a few complex cores. There's more to it, but that's the gist.

So this Intel Xeon Phi is just a highly parallel processor. It's an add-in to a system with complex regular Xeons the way a GPU would be - computing power optimized for different things. Actually, Xeon Phi is a full system-on-a-board, but Intel isn't selling it that way. It's just that tech has moved on from where it used to be, so the cheap and simple processor core of 2012 is an augmented top-of-the-line model from 1993. Nvidia and ATI/AMD came to HPC computing by a different route, starting with tons of extremely simple execution cores and building up complexity.
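If "tons of the same calculation over a ton of data" sounds abstract, the canonical toy example is a SAXPY-style loop, sketched below in plain C with OpenMP. Intel's whole pitch is that code already parallelized this way recompiles for Phi's many small x86 cores; this is just an illustration of the workload shape, not Phi-specific code:

```c
/* Sketch: the archetypal data-parallel workload - one tiny
 * calculation applied independently to millions of elements.
 * Compile with e.g.: gcc -fopenmp saxpy.c */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const int n = 10 * 1000 * 1000;
    float a = 2.0f;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Every iteration is independent, so the work spreads across however
     * many cores you have - a few big ones or dozens of simple ones. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f, using up to %d threads\n", y[0], omp_get_max_threads());
    free(x); free(y);
    return 0;
}
```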
|
# ¿ Jun 20, 2012 08:01 |
|
That's certainly a better reply than the 500 pixel I was considering posting. I have eye candy lust, but nowhere near enough to drop big bones on a PhysX assist card. Agreed worked very hard to earn a pass from my raised eyebrow of high-horse scorn.
|
# ¿ Jun 20, 2012 11:19 |
|
eggyolk posted:Sorry, meant low profile. AFOX makes a low-profile Radeon 6850, but I don't think there are any North American distributors yet.
|
# ¿ Jun 22, 2012 19:49 |
|
Rekkit posted:I have a GT 570m on my laptop that kicks in real loud when I'm playing games. What's a good program to monitor the temperature? I tried using HW Monitor but it only shows the CPU temperature. HWiNFO64 or GPU-Z.
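And if you'd rather log the temperature yourself than stare at a GUI, Nvidia's NVML library (it ships with the driver) exposes the same sensor programmatically. A minimal C sketch, assuming a single Nvidia GPU at index 0:

```c
/* Sketch: read the GPU core temperature via NVML.
 * Compile with e.g.: gcc gputemp.c -lnvidia-ml */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    if (nvmlInit() != NVML_SUCCESS) {
        printf("NVML init failed (Nvidia driver not present?)\n");
        return 1;
    }
    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        char name[64];
        unsigned int temp = 0;
        nvmlDeviceGetName(dev, name, sizeof name);
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        printf("%s: %u C\n", name, temp);
    }
    nvmlShutdown();
    return 0;
}
```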
|
# ¿ Jun 25, 2012 02:12 |
|
Why not get the 690, keep the 680, and do triple SLI? Meanwhile, I want a pony who rides a goose who lays golden toilets. Also, if you would need a PSU upgrade for a 680 SLI setup, you'd need it for the 690 as well.
|
# ¿ Jun 25, 2012 02:41 |
|
The 560 Ti-448 has 1.25 GB of VRAM, which is enough for the many games that would just barely exceed 1 GB.
|
# ¿ Jun 25, 2012 09:35 |
|
Unless there's been some major new update in the past couple months, d-mode Virtu is hella bugtastic with QuickSync. And i-mode Virtu doesn't expose any of those sexy driver optimizations properly, so you don't get them. In terms of driver/QuickSync compatibility, you'd be better off hooking up the IGP to another port on your monitor and cheating by setting it to mirror or just disabling the second screen output. But the real crux of the matter is that your capture/stream/encoding software needs to be QuickSync-compatible. AFAIK/AFAI-can-Google, all the big players in streaming software are aware of QuickSync and think it's cool, but their software would require major re-engineering to integrate the Intel Media SDK, so the support just isn't there yet. Ironically, Intel WiDi already does exactly what you want - it encodes the frame buffer with the QuickSync engine for streaming. It just only does so over WiFi to a dedicated receiver, and so is completely useless for your purposes.
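To give a sense of what "integrate the Intel Media SDK" means at even the very first step: before an app can touch the QuickSync encoder at all, it has to open a hardware Media SDK session, and that's exactly the call that falls over on the wrong driver/display configuration. A minimal C sketch against the SDK's dispatcher (libmfx), just the availability probe, nothing like a full encode pipeline:

```c
/* Sketch: probe for a hardware (QuickSync) Media SDK implementation.
 * Compile/link against the Intel Media SDK dispatcher (libmfx). */
#include <stdio.h>
#include <mfxvideo.h>

int main(void)
{
    mfxVersion ver = { {0, 1} };   /* request API 1.0 or later (Minor, Major) */
    mfxSession session;

    /* Fails unless the Intel graphics driver is active and usable. */
    mfxStatus sts = MFXInit(MFX_IMPL_HARDWARE, &ver, &session);
    if (sts != MFX_ERR_NONE) {
        printf("No hardware QuickSync session available (status %d)\n", sts);
        return 1;
    }
    MFXQueryVersion(session, &ver);
    printf("QuickSync available, Media SDK API %d.%d\n", ver.Major, ver.Minor);
    MFXClose(session);
    return 0;
}
```
|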
# ¿ Jun 25, 2012 22:32 |
|
Yes.
|
# ¿ Jun 26, 2012 05:06 |
|
You didn't look at the sticky threads, did you? Hint: "Diablo 3" does not imply the thread is actually exclusive to Diablo 3.
|
# ¿ Jun 30, 2012 04:41 |
|
Can you even fit a quad-slot cooler like that in the top slot without hitting I/O panel ports, the CPU socket, or the RAM slots on an X79 board?
|
# ¿ Jun 30, 2012 08:32 |
|
Try HWiNFO64 or GPU-Z; FurMark should also have some reporting when you run its benchmark.
|
# ¿ Jul 7, 2012 02:03 |
|
Ervin K posted:I have a question on the relationship between the CPU and GPU. Is there some kind of a performance ratio that we have to maintain between the video card and the processor? What would happen if you paired an old Core 2 Duo processor with a GTX 690? Except for exceptionally CPU-heavy games like StarCraft 2, Civ5, BF3 (more about core count than clock speed), SWTOR (ditto), or the ARMA series, most CPUs are good enough for most games regardless of GPU. The one exception is SLI or CrossFire setups. SLI/CF requires additional CPU power to coordinate compared to a single-card setup, so a CPU that was good enough for a single card might not be good enough for an SLI/CF pair. That said, an i5-3570K at stock clocks has plenty of power to handle a dual-card setup. Triple or quad SLI/CF is when you would be looking at overclocking or an i7. Not that the performance hit would be huge, but if you're spending $1,500 or so on graphics cards, you can afford an extra $100 and/or some tweaking time to get the most out of it. As for the driver thing: if it really bothers you, uninstall all the drivers, murderize them with Driver Sweeper, and then install a fresh batch of 12.7 Beta.
|
# ¿ Jul 9, 2012 21:15 |
|
What's the problem, exactly? The GPU can take it.
|
# ¿ Jul 13, 2012 00:33 |
|
Heads up for any of you nerds with Nvidia forum accounts: dey got hacked. quote:Our investigation has identified that unauthorized third parties gained access to some user information, including:
|
# ¿ Jul 14, 2012 12:23 |
|
It's not out just yet, and it's a standard trick: the "paper launch." Lots of top-end video cards have been getting them this year, and some SSDs as well (like Crucial's mSATA M4 drives).
|
# ¿ Jul 16, 2012 09:31 |
|
Bad video RAM on the graphics card. Hardware failure, needs to be replaced.
|
# ¿ Jul 19, 2012 00:30 |