|
Hi, I'm Factory Factory, your co-OP for this thread. I was the technical writer to Movax's engineer. That's not a euphemism. E: Quoting this: movax posted:Shadows Factory Factory fucked around with this message at 00:22 on May 11, 2012 |
# ¿ May 11, 2012 00:12 |
|
Reserved so I have a place for news and content. You may now poo poo up this thread. Factory Factory fucked around with this message at 00:16 on May 11, 2012 |
# ¿ May 11, 2012 00:12 |
|
You don't have a PayPal receipt or copy of the auction page that you can dig up?
|
# ¿ May 11, 2012 11:14 |
|
Practically, it's 1) power savings and 2) CUDA/GPGPU performance. The consumer 600 series will just be worse at CUDA than the 500 series. Otherwise, you can divide the CUDA core count on a 600-series card by 2 and get a pretty good first approximation of how it relates to a 500-series card in GPU-bound scenarios (quick sketch at the bottom of this post).

As for the purchasing questions, not only is there a price/performance chart linked at the top of the OP that will tell you everything you need to know to answer your question, it's also quoted in the post directly above yours. But that said, SWSP said he wants to keep this thread more on info and less on buying assistance, so let's try to keep that kind of question in the system building sticky.

Nierbo posted: Yes you're right, I just found it. They come with a two year warranty?

Factory Factory fucked around with this message at 12:26 on May 11, 2012
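Since the halve-the-cores rule of thumb is easy to garble, here it is as a minimal sketch (core counts are the published reference figures; this is a ballpark comparison, not a benchmark):

```python
# Rough sketch of the "divide Kepler CUDA cores by 2" rule of thumb above.
# Core counts are the public reference specs; clocks and memory are ignored.
REFERENCE_CORES = {
    "GTX 680 (Kepler)": 1536,
    "GTX 580 (Fermi)": 512,
}

def fermi_equivalent(kepler_cores: int) -> int:
    """First approximation: halve the Kepler core count to compare against Fermi."""
    return kepler_cores // 2

print(fermi_equivalent(REFERENCE_CORES["GTX 680 (Kepler)"]))  # 768 "Fermi-class" cores
print(REFERENCE_CORES["GTX 580 (Fermi)"])                     # 512 - so the 680 still wins in GPU-bound work
```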
# ¿ May 11, 2012 12:21 |
|
HIS cards come with a two-year warranty. The confusion was probably just a matter of miscounting the years.
|
# ¿ May 11, 2012 13:21 |
|
Verizian posted:
Everybody's drivers suck, just at different times. Intel's Windows drivers sucked until a few months ago; now they merely sip. Nvidia has constant problems, like a massive Shogun 2 performance bug at high resolutions and 3D Vision generally being a slightly bigger horror than AMD's stereoscopic 3D. AMD sucks at Rage and is filled with small bugs and performance issues that take a bit longer to iron out than Nvidia's, because AMD doesn't give game devs a bunch of free hardware in return for marketing and early access for driver optimizations.

quote:

With current-gen cards, you will not apply better TIM than the manufacturer can, period. Get the cooler you want the first time around if you want TIM perfection. If you have to have aftermarket cooling, Arctic Cooling makes pretty much the only good coolers.

quote:

For gaming (Eyefinity and Surround), the monitors must have the same resolution. For non-full-screen multi-monitor, the resolutions may be different; however, if they are, most video cards will clock up to "low 3D" clocks, which raises idle noise and power consumption significantly.

You can sum up the height * width resolutions to see how many megapixels your card is pushing out, but that's only relevant to workload for gaming (quick sketch below). Suggestions for full detail/AA are in post 2, after the rundown of model numbers. AnandTech also benchmarks at 1920x1080x3 Surround, and according to its numbers, if you are willing to drop AA and often some detail, you can get 40+ FPS with a GeForce 680. For 3x2560x or such, you will definitely need more horsepower for a high-detail gaming experience.

E: Quote != edit
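Here's that megapixel sum as a throwaway helper (the resolutions are just examples):

```python
# Sum of width x height across monitors = how many pixels the card pushes per frame.
def total_megapixels(monitors):
    return sum(w * h for w, h in monitors) / 1e6

print(total_megapixels([(1920, 1080)]))       # 2.1 MP  - a single 1080p screen
print(total_megapixels([(1920, 1080)] * 3))   # 6.2 MP  - 3x1080p Surround/Eyefinity
print(total_megapixels([(2560, 1600)] * 3))   # 12.3 MP - 3x2560x1600, roughly double the work again
```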
|
# ¿ May 11, 2012 13:35 |
|
AnandTech has a cool article up about using Thunderbolt and Virtu i-Mode together on a Wintel platform. Windows Thunderbolt isn't fully mature just yet (missing hot-add after boot, though it has hot-remove), but basically poo poo just works.
|
# ¿ May 12, 2012 01:44 |
|
hobbesmaster posted:I doubt you'll see hotswapable thunderbolt much. eSATA isn't hotswapable either and desktop PCs can take the removal of hard drives better than PCIe cards. It says right in the article that Intel is requiring hot-swap as part of the certification process, and it's just a matter of drivers. And eSATA is indeed hot-swappable, with the right drivers. You can also fudge it by forcing a "detect hardware" at the Device Manager.
|
# ¿ May 12, 2012 05:32 |
|
There isn't a way to turn it off, currently.
|
# ¿ May 12, 2012 08:49 |
|
Longinus00 posted:Why do you keep mentioning the nationality of the monitor? Why would that make any difference? Everything else that you've said is already covered in the OP. "Korean monitors" is a catchall used in the monitor megathread for imported 27" IPS screens, factory seconds that didn't make it into 27" iMacs. There are at least four brands being imported that have not-very-memorable names. Except for Catleap. Hence they're being referred to by nationality.
|
# ¿ May 12, 2012 23:13 |
|
movax posted: This is awesome, but guess who has a P67

--

Re: quiet GPUs, the biggest offender is definitely the blower-style fan. Blowers are very utilitarian and robust - they work even if you cram cards right next to each other in a case with poor ventilation. Open-air coolers (i.e. without the tight-fitting shroud) generally provide lower temperatures and much lower noise because the fans can be larger and the heatsinks can have more surface area. However, this comes at the cost of requiring better case ventilation, since air no longer moves over the card unidirectionally. Where a blower sucks air in from the front of the case and spews it out the back, an open-air cooler sprays hot air everywhere locally (though some gets ducted out the back slots).

The quietest, best-of-the-best coolers are triple-slot dealies, open-air designs which take up three expansion slots on the motherboard. These have a ton of room for heatsinks and high-end fans. Many high-end boards will space their CF/SLI-capable slots far enough apart to accept these cards.

One particularly quiet card that came up recently in the system building thread is the Asus GeForce 670 DirectCU II TOP. That fucker has a load fan noise of 25 dBA. It's the quietest high-performance video card I've ever seen, and it's quieter than any current blower-based reference design until you get down to the level of a Radeon 5670. TechPowerUp reviews.

Factory Factory fucked around with this message at 00:50 on May 13, 2012
# ¿ May 13, 2012 00:47 |
|
AnandTech will be doing a Q&A with a GPGPU sexpert, so submit some questions. And by "sexpert," I mean he was a founder of Ageia (of PhysX fame), became one of Nvidia's top CUDA guys, and is now with AMD spearheading their heterogeneous systems architecture, i.e. the seamless integration of highly parallel cores (i.e. GPUs) with complex serial cores (i.e. CPUs).
|
# ¿ May 15, 2012 02:29 |
|
Mirror's Edge, too. Cloth, shattering glass, and particles. https://www.youtube.com/watch?v=w0xRJt8rcmY
|
# ¿ May 15, 2012 14:49 |
|
Agreed posted:EVGA is -KR'ing the crap out of the 600-series, too. Where's my lifetime warranty, EVGA? Come on! Extended warranties will be things you can buy for any card. I think you can max it out at 10 years. Single retail SKU, simpler.
|
# ¿ May 16, 2012 23:31 |
|
Soul Glo posted:Quick question, would a 1 GB Radeon HD 7570 be okay for games around Diablo 3 caliber? Check it out yourself.
|
# ¿ May 17, 2012 01:43 |
|
At the dev level, sure. But when you're targeting a console with 512 MB of RAM for both game and graphics, the sensible thing is not to change that behavior for other platforms.
|
# ¿ May 17, 2012 03:13 |
|
The 6770 is a VLIW5 part and 6850 is VLIW4. D3 might be one of those rare games that uses that fifth codepath and gets more mileage out of it than just having more SIMD cores. Whatever the hell Special Functions is, maybe it's important?
|
# ¿ May 17, 2012 03:56 |
|
4 Day Weekend posted: I just read the SLI bit in the OP. Just to clarify, they need the same model (ie GF110) only? Memory/clock speed doesn't affect compatibility?

BFE's mistaken. SLI requires both cards to have the same GPU (e.g. GF110), the same number of active SMs/SMXs (i.e. CUDA cores) and ROPs, the same memory bus bitwidth, and the same amount of memory. Examples:
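To make those rules concrete, a minimal sketch (my own illustration; the card configs below are the published GTX 570/580 reference specs):

```python
# Toy check of the SLI compatibility rules above: same GPU, same active SM count,
# same ROPs, same memory bus width, same memory size. Clock speeds are ignored on purpose.
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    gpu: str       # e.g. "GF110"
    sms: int       # active SMs/SMXs
    rops: int
    bus_bits: int  # memory bus width
    mem_mb: int    # memory size

def can_sli(a: Card, b: Card) -> bool:
    return (a.gpu, a.sms, a.rops, a.bus_bits, a.mem_mb) == (b.gpu, b.sms, b.rops, b.bus_bits, b.mem_mb)

gtx570 = Card("GF110", 15, 40, 320, 1280)
gtx580 = Card("GF110", 16, 48, 384, 1536)
print(can_sli(gtx570, gtx570))  # True  - identical configs; different factory clocks would still pair
print(can_sli(gtx570, gtx580))  # False - same GPU, but different SM/ROP/bus/memory config
```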
Factory Factory fucked around with this message at 14:50 on May 17, 2012 |
# ¿ May 17, 2012 14:39 |
|
Holy poop on a stick. Nvidia announced Tesla hardware based on GK104 and GK110. Full details aren't out yet, but it looks like the full GK110 die is based on 2880 Kepler CUDA cores (1440 Fermi equivalent) in 15 SMXs, eight times the number of FP64-enabled cores per SMX vs. GK104, triple the L2 cache vs. GK104, 32-thread symmetric multithreading at the card level (Hyper-Q, like hyperthreading on Intel processors), the ability for GPU threads to spawn additional GPU threads without CPU intervention, a 384-bit memory bus, and 7.1 billion loving transistors per chip.
|
# ¿ May 17, 2012 14:51 |
|
It might even run Crysis.
|
# ¿ May 17, 2012 14:59 |
|
Biggest human being Ever posted:It's passively cooled too, looks like a good choice for a HTPC. They're passively cooled as long as they're in a 110 dBA forced-air server enclosure, sure.
|
# ¿ May 17, 2012 15:08 |
|
Pretty sure that was sarcasm. -- In other news, AnandTech will have an article on Nvidia GeForce Grid later today. It's, well, cloud-based gaming, like OnLive. The idea is to run PC games at full details in a server farm and pipe I/O to any device - console, PC, Mac, tablet, etc.
|
# ¿ May 17, 2012 15:23 |
|
Well, Intel's WiDi uses the QuickSync engine to compress the frame buffer to H.264. Ivy Bridge's QuickSync is fast enough to compress 1080p30 and push it over 802.11n. The idea is there and it exists; it's just a matter of figuring it out over lower bandwidth.
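Back-of-the-envelope numbers for why the compression step matters (my ballpark figures, not Intel's):

```python
# Uncompressed 1080p30 vs. a plausible H.264 stream, to show why QuickSync is in the loop.
width, height, fps, bits_per_pixel = 1920, 1080, 30, 24
raw_mbps = width * height * bits_per_pixel * fps / 1e6
print(f"uncompressed 1080p30: ~{raw_mbps:.0f} Mb/s")   # ~1493 Mb/s - far beyond 802.11n
h264_mbps = 15  # rough streaming bitrate assumption
print(f"H.264 at ~{h264_mbps} Mb/s fits comfortably in real-world 802.11n throughput")
```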
|
# ¿ May 17, 2012 16:05 |
|
Raster unit (Render OutPut unit/Raster Operations Pipeline); it's basically the final conversion stage between texture mapping/shaders and final color value for a pixel. This was more important to know when ROPs, texture units, and shaders all came in equal numbers on each GPU, which they no longer do. AFAIK, no GPUs within one family change ROPs unless they change SM(X)/CUDA cores, too, so it's possible to get by without knowing anything about them, really. Edit for your edit: No problems I can think of.
|
# ¿ May 17, 2012 16:58 |
|
DX11.1 isn't going to be a huge release, mostly behind-the-scenes stuff for performance and API integration. It will include stereoscopic 3D support, though, so, hypothetically, every game will get S3D without fiddly vendor-specific implementations.
|
# ¿ May 18, 2012 09:51 |
|
Plus I'm sure we'll get the same stilted and/or recycled motion capture we always do. Why can't more games use Euphoria?
|
# ¿ May 18, 2012 11:30 |
|
N.B. you have to order the GTX 555 version of the X51 to get a 330W power brick, which can juuuuuust handle a non-overclocked GeForce 670 with its 170W TDP/141W PowerTune target (rough math at the end of this post). You might prefer checking out Star War Sex Parrot's posts in the system building thread about the SilverStone Sugo-based mini-ITX box he built; it's a similar volume to a console, though not a similar shape the way an X51 is.

--

E: Hey guess what! Nvidia is repackaging the lovely Fermi cards as GeForce 600 series! Again! GeForce GT 610

E2: The GeForce GT 610 costs $60 shipped at Newegg. That must be the same price-performance curve as the $110 Radeon 6450 with 2GB of VRAM.

Factory Factory fucked around with this message at 04:34 on May 20, 2012
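For the curious, the power-brick arithmetic roughly works out like this (everything except the brick rating and the GPU TDP is my own ballpark assumption, not Dell's spec sheet):

```python
# Rough power budget for a 330W-brick X51 carrying a GTX 670.
brick_w = 330
gpu_w = 170        # GTX 670 TDP (141 W typical PowerTune target)
cpu_w = 77         # e.g. an Ivy Bridge quad-core TDP (assumption)
rest_w = 50        # motherboard, RAM, storage, fans - ballpark
headroom_w = brick_w - (gpu_w + cpu_w + rest_w)
print(headroom_w)  # ~33 W to spare - it fits, but there's no room for overclocking
```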
# ¿ May 20, 2012 03:45 |
|
Lord, Metro 2033. I discovered - hey, not only do we have quote linking, but we've got reply drafts and character counters? - I discovered that the big performance hog for a Radeon 6850 CF setup was depth of field, of all things, and that the game ran very smoothly once I turned that off. I also discovered that my graphics overclock is stable for Furmark but not for Metro. Good lord, that game. Whatever your video card, it will have the poo poo kicked out of it by Metro 2033.
|
# ¿ May 21, 2012 09:09 |
|
DX9: yes
DX10: yes
DX11 without advanced DoF: yes
DX11 with ADoF and decent framerates: nnnnnope.
|
# ¿ May 21, 2012 10:08 |
|
Yeah, well, I'm also two-loops-of-Unigine Heaven stable, too. Frickin' Metro.
|
# ¿ May 22, 2012 00:40 |
|
System building/parts picking thread is ^^^^ thataway.
|
# ¿ May 22, 2012 22:57 |
|
There is none; low-end Nvidia cards are terrible values. And this still isn't the parts-picking thread.
|
# ¿ May 22, 2012 23:19 |
|
A few modern games, too, like LA Noire. The facial animation system is a 30FPS video normal map, basically, so the engine locks in sync with that. Also, this is neither here nor there, but every time I type "LA Noire" I very nearly typo "LA Norse," and I imagine the most wonderful Skyrim mashup.
|
# ¿ May 23, 2012 19:31 |
|
Why not go for full SSAA then? Render internally at 10x resolution and then downscale.
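As a toy illustration of the idea (pure NumPy, nothing GPU-specific; the 2x factor is just an example):

```python
# Supersampling in miniature: render at a multiple of the target resolution,
# then box-filter down so each output pixel averages a block of samples.
import numpy as np

def downsample(image: np.ndarray, factor: int) -> np.ndarray:
    h, w, c = image.shape
    return image.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

hi_res = np.random.rand(1080 * 2, 1920 * 2, 3)  # stand-in for a 2x2-supersampled frame
frame = downsample(hi_res, 2)
print(frame.shape)  # (1080, 1920, 3) - edges get averaged instead of aliased
```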
|
# ¿ May 24, 2012 17:17 |
|
2GB of VRAM doesn't seem to limit the 680 in SLI when working with 3x1920x1080, so I don't think a single 2560x1440 monitor will pose any VRAM issues whatsoever.
|
# ¿ May 28, 2012 06:34 |
|
The FaH guys don't have a client that will fold on a 680, actually, so all that is moot until there's a software update.
|
# ¿ May 28, 2012 13:34 |
|
They can, using Lucid Virtu. Otherwise they cannot, as there's no DP connection from the video card to the Thunderbolt adapter. https://www.youtube.com/watch?v=O1t7Rc9qFgI
|
# ¿ May 28, 2012 23:50 |
|
AnandTech has a dual feature on an industrial PC and the "Cedar Trail" Intel Atom that runs it. Cedar Trail is a rarity among Intel chips - its IGP is not Intel-designed, but rather an IP block from PowerVR, the SGX 545. The GPU is branded as the GMA 3600/3650, depending on clock speed.

The PowerVR Series 5 (a.k.a. SGX) is one of those low-power-optimized architectures. It is entirely DirectX 10.1 compliant, but like Intel's HD Graphics GPUs, much is accomplished in efficient fixed-function hardware. It's used extensively in smartphones and tablets, such as the iPhone 4, first iPad, Palm Pre, BlackBerry Playbook, and Samsung Galaxy S. It runs at very low clockspeeds - the SGX 545 is the most powerful unit in Series 5, and its as-designed clockspeed is only 200 MHz. Intel runs the GPU at up to 650 MHz in Cedar Trail, however.

Block diagram of an SGX GPU

You might think it's odd that a phablet GPU is being put in a netbook platform. Well, this third-gen Atom core is identical to the first-gen Atom core, just clocked higher; it's not much faster than top-end phablet CPUs at this point, so why not give it graphics to match?

The Cedarview SoC, including the Cedar Trail CPU core and PowerVR GPU IP block

That second block diagram and other material promise a lot from this GPU and its associated hardware for this update to the Atom platform:
Intel has provided the shittiest of drivers for the GMA 3650. The launch drivers had major problems with screen tearing and stuttering... on the Goddamn desktop. The GPU can't handle a Windows desktop, regardless of settings - any resolution, Aero on or off. And the update package for newer drivers hoses your OS install and prevents you from entering Windows at all. You have to flatten and reinstall if you're updating from the launch drivers.

Once you have those new drivers installed, things are improved... somewhat. You can now display a blank desktop properly, but if you get saucy and move a window, the system lags to hell. HD video decode? A solid "Almost." 720p YouTube works. 1080p YouTube stutters all over and drops frames. Netflix and Hulu are SD-only. And did I mention? Windows 7 32-bit only. No 64-bit, no other OS, not even Linux.

Intel can write better drivers. The phone version of Cedar Trail, Medfield, has fantastic Android support, and HD 4000 is a paragon of functionality compared to the GMA 3650. But they have not written better drivers. GMA 36x0's drivers being in this state on a shipping Intel product in 2012 is just crazy.
|
# ¿ May 29, 2012 06:20 |
|
The chip is targeting industrial appliances and $200 netbooks, here. It's still unacceptable, but context, people, context. It's not like they're replacing their entire product line top to bottom with this stuff and forcing you to buy it. $200 netbooks. That's the price new.
|
# ¿ May 29, 2012 15:50 |
|
|
It's DDR SDRAM. 6.008 "GHz" = 3.004 GHz * two transfers per clock. GDDR5 is actually a bit more complex than that, but that's the gist of it. Complex part: GDDR5 has two different clocks: command clock (CK) and write clock (WCK). For hypothetical 6 GHz GDDR5, CK runs at 1.5 GHz (1/4) and WCK runs at 3 GHz. CK and WCK are synchronized, so you can think of GDDR5 as being able to do four I/Os per command. Good for highly parallel, bandwidth-intensive workloads like graphics!
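The same arithmetic as a tiny sketch (the bus width is just an example to show where the headline bandwidth numbers come from):

```python
# GDDR5 clock relationships for a "6 GHz" part: the marketing number is transfers
# per second per pin; WCK runs at half that (data is DDR on WCK) and CK at a quarter.
effective_gtps = 6.0              # 6 GT/s per pin, the "6 GHz" figure
wck_ghz = effective_gtps / 2      # 3.0 GHz write clock
ck_ghz = effective_gtps / 4       # 1.5 GHz command clock -> four transfers per CK cycle
bus_bits = 256                    # example bus width (assumption, e.g. a GTX 680-class card)
bandwidth_gbs = effective_gtps * bus_bits / 8
print(wck_ghz, ck_ghz, bandwidth_gbs)  # 3.0 1.5 192.0 (GB/s)
```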
|
# ¿ May 30, 2012 18:50 |