movax
Aug 30, 2008

Glen Goobersmooches posted:

I'm getting a brand spankin' new 670 tomorrow, and I've been reading some scary things about Nvidia's last few rounds of driver releases. Is there a general consensus on the least wretched Beta version at the moment?

I powered mine on with 305.67 and it worked great over the weekend.

Which reminds me, Agreed, would you mind posting a screenshot of your fan profile for your 680?


movax
Aug 30, 2008

Jan posted:

To expand further on this with some personal experience:

I got myself a 30" (2560x1600) monitor a little while ago, while still running a Radeon HD 5870. Among other things, I was able to run Serious Sam 3 and Skyrim at high detail (not ultra), in 2560x1600, while easily staying above 30FPS. I haven't tried Crysis 2, Metro 2033 or any of the traditionally "taxing" games, but for the most part the old 5870 more than pulled its weight. The only game that had difficulties was SWTOR, but that's because their rendering engine is a massive turd.

I ended up replacing it anyway, but it wasn't for lack of performance as much as the fact that the reference cooler on it was like having a B-52 in the room.

Yeah, I had my 460 running 2560x1600, and just left AA off, with perhaps some other settings dropped to medium. Now with a 670, I can max out all the settings and even throw on some AA if I feel like it. More importantly, though, it stays at 60FPS most of the time instead of slightly above 30 (and I play with V-Sync on).

movax
Aug 30, 2008

Lowclock posted:

I tried a few other programs, and other cards and slots too, and always get x2 in the top slot. If I move it to the middle one it shows x8 like it should and benches higher, and in another motherboard it shows up as x16. I tried messing with BIOS settings for a while, which didn't really make any difference either. I think it actually might be something wrong with the motherboard.

Yeah, it's possible there's a cold solder joint somewhere on one of the muxes or TX-side caps on the mobo. See if you can get Asus to do an advance RMA; in the meantime you can run it at x8 without a huge performance loss.

movax
Aug 30, 2008

Kramjacks posted:

So apparently MSI was overvolting their GTX 660 Ti and 670 Power Edition cards, which gave them performance gains but also caused some systems to fail to POST or get black screens after a change in load.

http://www.tomshardware.com/news/MSI-GTX-660-670-overvolting-PowerEdition,18013.html

That circuit looks really weird; any other EEs floating around? I don't know where they sourced the diagram from, but it looks more like a design error than anything else, IMHO. AC coupling the anode of a Zener regulator? :psyduck: If anything, that cap would be on the output of the regulator (between cathode and anode). Maybe that sheet in Allegro/whatever was really messy and they missed it.

e: looking at the datasheet of the RT8802A, I don't see what pin would purposely benefit from getting >5V. I could see trying to game the soft-start (SS) or a compensation network to get a more aggressive response (Type 2?), but that wouldn't make sense here.

I'd hold off on flaying MSI for now, though it's still a bit odd that something this simple (and apparently from the reference design) made it through.

movax fucked around with this message at 21:05 on Oct 8, 2012

movax
Aug 30, 2008

Wozbo posted:

http://www.xbitlabs.com/news/cpu/display/20110119204601_Nvidia_Maxwell_Graphics_Processors_to_Have_Integrated_ARM_General_Purpose_Cores.html

This is the first article that came up (there are many more, look up Maxwell architecture), but basically it's automating all the stuff that you currently have to do on a GPU with the help of a CPU, plus some nice things like preemption. I think they are also going to skip a fab step and go smaller, but I'm a bit lazy to look right now. If this pans out the way they say it does, it's going to make something like 8K UHD resolution viable with Crysis <foo> going full blast. More likely it will be a 30-50% boost in the first generation, with the "tock" generation right after adding another 20-30% on top of normal gains as they figure out what to optimize. If I remember correctly, it's currently on track to be something like 8x the power of the current 6xx series, but not out till late 2014-2015. Would be cool to see prototypes on the new consoles, so we all get some awesome graphics for the next xxx years.

That's pretty slick, not to mention clever and relatively cheap. You get the netlist for the ARM core from ARM, free to implement it on your chosen process, and make whatever changes you deem fit. Since it's ARM, I could see them leveraging AXI to create an interconnect between the ARM cores and their logic, or their own high-performance bus.

It would have been pretty :geno: if they just threw ARM cores on there and left the usage up to software developers, but having them autonomously (presumably) control GPU functionality is pretty neat. Maybe instead of drivers executing a fuckton of MMIO and hitting registers, it'll be some kind of "dispatch workqueue" thing. (Just guessing, I've only ever worked with Intel drivers).

Maybe prettier textures will happen in line with the Maxwell release too, since the next-gen consoles should have more RAM & VRAM. Have to stress PCIe 3.0 bandwidth somehow!

movax
Aug 30, 2008

Alereon posted:

The really interesting thing to me is that there actually are no ARM cores here. Project Denver is an implementation of Transmeta's Code Morphing technology to execute ARM code on a custom-designed nVidia core. The original plan was to execute both x86 and ARM on the same cores, but Intel successfully sued to block this x86 compatibility, arguing that the x86 license didn't transfer to nVidia when they acquired the corpse of Transmeta.

Oops, I thought they were just licensing it, not rolling their own ARMvX-compatible core. Should be interesting times next year.

movax
Aug 30, 2008

Endymion FRS MK1 posted:

How does RMA-ing a card work if they no longer make the card? I returned a Sapphire 6950 I had bought last year for a fan problem, and doing a quick look on Newegg shows the card being deactivated.

In the meantime, however:


:suicide:

They'll probably send you a "better" one. Usually, this means the current-generation equivalent of where the 6950 used to live in the line-up. This can lead to hilarity in generation gaps where the newer card actually performs worse in certain games and conditions.

Then again, I bought a single 6800GT from eVGA and my murderous PSU caused me to RMA for a 7900GT, followed by an 8800GTS 640, so that wasn't too bad. :shobon:

movax
Aug 30, 2008

Alereon posted:

I'm pretty sure VRAM and system RAM do use the same address space, that's why 32-bit systems can only address 4GB-VRAM-all other hardware reservations worth of system RAM. This isn't relevant for the case of a 32-bit app running on a 64-bit system because Skyrim doesn't care about the VRAM, only the video driver does, and that's a 64-bit application.

VRAM is accessible via a PCI BAR most of the time. I can only assume that as VRAM sizes grew while 64-bit adoption was slower, GPU manufacturers added some kind of pseudo-VM to address the full VRAM while only requesting a 256 or 512MB BAR. So I guess you'd have four 512MB pages on a 2GB card. On modern NV, I think BAR0 is command/control and BAR1 is VRAM; I don't know what the other BARs do off-hand.
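
To make the hand-waving a bit more concrete, here's a toy sketch of the kind of windowed aperture I'm picturing: a small BAR plus a window register that selects which chunk of VRAM the BAR currently aliases. Every name and offset below is made up for illustration; it's not lifted from any actual NV/AMD programming docs.

code:
#include <stdint.h>
#include <string.h>

#define WINDOW_SIZE     (256u << 20)    /* pretend BAR1 is a 256MB aperture */
#define REG_VRAM_WINDOW 0x1700          /* made-up window-select register in BAR0 */

struct gpu {
    volatile uint32_t *mmio;            /* mapped BAR0: control registers */
    volatile uint8_t  *aperture;        /* mapped BAR1: window into VRAM */
};

/* Write into VRAM at an address that may be beyond the 256MB window. */
static void vram_write(struct gpu *g, uint64_t vram_addr,
                       const void *src, size_t len)
{
    uint64_t page   = vram_addr / WINDOW_SIZE;
    uint64_t offset = vram_addr % WINDOW_SIZE;

    /* slide the window so the BAR aliases the right chunk of VRAM */
    g->mmio[REG_VRAM_WINDOW / 4] = (uint32_t)page;

    /* then write through the BAR like it's plain memory
       (volatile cast away and window-crossing transfers ignored for brevity) */
    memcpy((void *)(uintptr_t)(g->aperture + offset), src, len);
}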

The video driver is furiously executing memory read/write commands that get routed over PCIe to communicate with the GPU; this is why a GPU with an autonomous CPU on-board (like NV + ARM) gets interesting, because it opens up the possibility of the video driver sending a high-level command over PCIe and letting the on-board CPU take care of programming registers.

Most PCI MMIO space gets mapped below 4G by the BIOS to retain compatibility with 32-bit OSes; it'll remap any remaining DRAM above the 4G limit where 64-bit OSes can get to it. MTRRs will be set accordingly (on Linux, cat /proc/mtrr to see). A 32-bit OS gets whatever RAM fits below the 4G barrier after the PCI MMIO takes its cut; 64-bit OSes get that plus whatever memory is remapped at 0x100000000 and above.

It is totally possible on buggier BIOSes to brick your system by adding so many PCI devices that you literally cannot place any user RAM below 4G, and the MRC eventually ends up with TOLUD (top of lower usable DRAM) at 0.

e: discrete GPUs only consume address space whereas an IGP that needs system memory will actually steal physical memory for its needs

e2: Go to Device Manager and open the properties for your GPU. Go to the "Resources" tab and you'll see the memory/IO resources there. It should have some small amount of IO address space for legacy reasons, and the rest should all be BARs. At work, my 256MB Radeon HD5700 for instance has:
0x(00000000)E0000000-0x(00000000)EFFFFFFF - 256MB BAR
0x(00000000)F7DE0000-0x(00000000)F7DFFFFF - Control registers?
0xDC00-0xDCFF (+ various) - I/O ports (completely separate from memory space)

I have all those extra 0s because I'm on 64-bit Win 7. Looks like the control BAR got squeezed into the little region available near the top of lower memory where a lot of tiny BARs end up on Intel platforms.

On Linux, just do lspci -v.

movax fucked around with this message at 20:59 on Oct 17, 2012

movax
Aug 30, 2008

Jan posted:

Since the subject piqued my curiosity, I did some extra research, and that does sort of match what I've found. What I'm unsure about is that while bus I/O (AGP, PCI-E or otherwise) does seem to require some shared memory (for memory-mapped I/O, at least), there shouldn't be any correlation between the amount of VRAM a GPU has and the amount that mapped space will take up. All it does is create a buffer through which the CPU and GPU can communicate, and there's no point making this buffer larger than bus bandwidth.

It's not really clear to me how much of this memory responsibility belongs to the program, the GPU driver or the OS... I hadn't realised how much simpler unified memory (on 360) is. I will definitely have to read that article.

The problem is that on a 32-bit OS with only 4GB of addressable memory, you very quickly run out of address space for physical DRAM when you have to devote gigs of it to memory-mapped I/O. Theoretically, you could lose up to 256MB of addressable memory just for the entirety of PCIe config space, if you wanted to support all of it.

e: theoretical max:
256 buses * 32 devices * 8 functions * 4KB config space = 256MB

movax fucked around with this message at 21:05 on Oct 17, 2012

movax
Aug 30, 2008

KillHour posted:

I don't know why they don't just program games to be 64 bit nowadays. Is anyone really still using a 32 bit OS for gaming?

I could see a few reasons:

- Engine / toolkit might not support it. Developers like Crytek might not care about this so much, but other houses that license an engine might be limited by the version/release of the engine they're using

- Going to a 64-bit release means that no 32-bit system can run that code, and deploying both versions doubles your QA load, I would imagine. Not to mention the pointer hell involved in figuring out what's broken in each version.

My first point might be moot though, as obviously some engines are capable of targeting PS2/PS3/Wii/X360/etc with the same codebase. A gamedev could probably comment better than I can.

The biggest benefit of 64-bit (IMO) for "most people" is that a 64-bit OS essentially removes any limitation on addressable memory. Some chipsets / platforms "only" support 40/48 bits or so, which is still a stupidly large amount of memory. Once everyone's on 64-bit, you could have PCIe devices exposing stupidly large BARs that encompass their entire onboard memory without any paging. Who cares about burning 8GB of memory space when you're not going to run a 32-bit OS and have exabytes of memory space?

movax fucked around with this message at 21:33 on Oct 17, 2012

movax
Aug 30, 2008

Agreed posted:

What am I missing, he just said he's running at PCI-e 3.0 16x and 4x, is he mistaken or have I misread?

Typo, I assume; a P55 mobo wouldn't have PCIe 3.0 support.

A card strapped to a x4 link off the PCH also has to traverse DMI to reach the CPU.

movax
Aug 30, 2008

Alereon posted:

I'm not going to pretend I was able to understand this, but is there maybe a difference between the "right" way to do things and the way it gets done in practice? Or is that what you're saying? I've never seen a system with a 32-bit OS and a discrete videocard have more available RAM than what would be expected from 4GB-VRAM-other hardware reservations, and it seems like if it was possible to do without a hell of a lot of development, they would have done it for the competitive advantage.

Sorry, I was just finishing up patches to an internal platform that corrected some issues with BIOS MMIO assignment so the acronyms just kinda flowed :shobon:

Not quite sure what you're asking, but in terms of PCI MMIO (memory-mapped IO) there's nothing terribly special about a GPU, other than it being the consumer device most likely to take a huge bite of address space. Your ethernet controllers and such also eat up MMIO space, but their BARs are less than a megabyte in size.

In my case, I have custom data acquisition hardware that eats ~128MB worth of MMIO per unit, and a given system can have 8-10 of these things hooked up. The customer has a 32-bit Linux kernel that they have no plans to upgrade from any time soon, so they have to suffer with only ~2GB of usable RAM in the system.

Chuu posted:

One more question about 4GB cards, if you have two 2GB cards in SLI, are the textures duplicated or is it essentially the same memory addressing as a single 4GB card?

Not sure actually, that's an interesting question. From a hardware perspective, I could see the GPUs recognizing that an SLI bridge is present and changing the BARs they request appropriately.

Agreed posted:

Can't last, I understand that; this is REALLY looking like the last generation where PCI-e 2.0 8x is going to offer performance within a few percentage points of PCI-e 3.0 16x in the vast majority of scenarios. But it does make me much more comfortable waiting 'til Haswell for my next major system upgrade. Probably just need to enable the Marvell SATA controller for my optical drive to free up one SATA port in case I need to expand storage; otherwise it looks like I should be fine until Haswell. Which is pretty exciting in itself :dance:

The downside of PCIe 3.0 is that it's pricier to develop for. Due to the increased speed, you probably need a 12.5GHz or 16GHz scope to properly debug signal integrity issues. Granted, a lot of the cost of testing is eaten by companies like Altera or Xilinx (plus the usual Intel, AMD) that developed PCIe 3.0 IP and validated their transceivers/solutions against PCI-SIG specs.

At PCIe 3.0 speeds you have to use a time-domain reflectometer, an accurate model of your board (HyperLynx SI or similar), or brute math to get the S-parameters of your board (a Touchstone file), and then properly apply emphasis/de-emphasis to your captured waveforms.

Basically, PCIe 3.0 is fast as hell and requires some investment in development tools and increased development time. It lowers pin-count, sure, but a lot of companies will still find it cheaper to push out PCIe 1.1/2.0 devices, especially if they started developing their ASIC with older-generation IP and SerDes.

The lower pin-count is awesome, but peripheral vendors need to catch up. Think of how many RAID/HBA controllers you could run from 1 x16 PCIe 3.0 link, heh. Could even throw a PCIe switch into the mix to use as a bandwidth bridge.

e: BAR is Base Address Register. Software writes all 1s to the register and then reads the value back; hardware ties certain low bits to 0, and the value that comes back tells the host system how much memory the device wants.
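
If that's not clear, here's roughly what that sizing dance looks like in C. cfg_read32/cfg_write32 are hypothetical stand-ins for whatever config-space accessor your platform gives you (0xCF8/0xCFC ports, ECAM, your OS's pci_read_config equivalent), not a real API:

code:
#include <stdint.h>

/* hypothetical config-space accessors, declared here just for the sketch */
uint32_t cfg_read32(int bus, int dev, int fn, int off);
void     cfg_write32(int bus, int dev, int fn, int off, uint32_t val);

/* Size a 32-bit memory BAR sitting at config-space offset bar_off. */
uint32_t bar_size(int bus, int dev, int fn, int bar_off)
{
    uint32_t orig = cfg_read32(bus, dev, fn, bar_off);

    cfg_write32(bus, dev, fn, bar_off, 0xFFFFFFFF);      /* write all 1s */
    uint32_t probe = cfg_read32(bus, dev, fn, bar_off);  /* size bits come back as 0 */
    cfg_write32(bus, dev, fn, bar_off, orig);            /* restore the original value */

    probe &= ~0xFu;        /* strip the low flag bits of a memory BAR */
    return ~probe + 1;     /* e.g. reading back 0xF0000000 means a 256MB BAR */
}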

e2: Yeah Jan, even going up to PCIe 2.0 should result in a nice performance boost for you.

movax
Aug 30, 2008

Professor Science posted:

(important note: one PCI device can have more than one BAR, and there are fun alignment restrictions with BARs that may cause the actual physical address space consumed to be much greater than you'd expect from the sum of the sizes of the BARs)

Right, that's why I mentioned older Nvidia cards having multiple BARs, one of them being command/control, one (presumably) aliased to VRAM, etc. At least that's what the Nouveau docs seem to suggest up to NV50 or so. A Type 0 header allows for up to six 32-bit BARs, though I haven't run into a device with that many in the field. BAR alignment as described (hardware tying certain bits to 0, software writing all 1s and reading back) ends up being power-of-two, so yeah, if you need 90MB you end up burning 128MB.
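
The 90MB -> 128MB thing falls straight out of that all-1s/read-back scheme only being able to express power-of-two sizes; quick toy illustration:

code:
#include <stdio.h>
#include <stdint.h>

/* Round a needed size up to the next power of two, which is effectively
   what a BAR forces the device to request. */
static uint64_t next_pow2(uint64_t need)
{
    uint64_t size = 1;
    while (size < need)
        size <<= 1;
    return size;
}

int main(void)
{
    uint64_t need = 90ull << 20;   /* the hypothetical 90MB of VRAM/registers */
    printf("%llu MB\n", (unsigned long long)(next_pow2(need) >> 20));  /* prints 128 */
    return 0;
}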

movax
Aug 30, 2008

Mierdaan posted:

Excellent, thanks. I also found this old brief describing the differences but it looks like it's from 2003. Gives me a good idea of how they treat the two different lines, though.

Basically, when you pay for the Quadro, you get the ISV certification, ECC memory, and the knowledge that software vendors qualified against your particular card. Should cut down on compatibility issues as well.

It's somewhat an example of executing the whole "we can sell the [essentially] same product to w people for $x and to y people for $z, so let's do both!" play.

movax
Aug 30, 2008

Rakthar posted:

After having great luck with two Nvidia cards in a row (8800 GTS, then a GTX 570) I decided to try out a GTX 680. I got an EVGA model from Amazon and plugged it right in. Almost right away I noticed this weird hitching / stuttering that was present both during gaming and video playback. If I tried watching youtube videos or regular videos, about every 30 seconds there would be a noticeable stutter and then it would resume smooth playback.

In games my FPS was much higher than with the 570 but again, same issue. About every 30-60 seconds it would seem like I was getting 4 FPS - but Fraps did not bear this out, it claimed my framerate was steady. The effect was super noticeable and super annoying. Even my roommate commented on it when we were watching some TV shows together. I tried the whole works - clean reinstalls, registry sweepers, driver cleaners, changing settings (disabling vsync) - and none of it helped.

I went back to the 570 and the problem completely went away. I have a 600 watt PSU and an i7-2600k and I don't believe either is the issue - it was when the card was not under load or almost completely idle that it would happen most often.

When I started googling '680 stutter' it seems this is a very common issue with 670/680 series cards, and (my speculation here) from what I read it seems likely related to the power management / voltage throttling that Nvidia introduced with this series of cards.

I would like to upgrade my GPU but am kinda lost here. I could try a 670 but that's less of an upgrade than the 680 and still seems to suffer from the same problem - although less often it seems. I was looking at a 7970 but I have concerns about ATI's drivers and overall stability, with issues like the frame latency being top of mind.

Does anyone have either any more info about the stuttering issue with the 670 / 680 cards, or suggestions on how the 7970 performs based on their experience with it?

Did you fully clean drivers before upgrading?

Do you have any OCing software (like EVGA Precision) installed?

movax
Aug 30, 2008

parasyte posted:

Though if you were out even longer than that, ATI originally had 7000-9000 series cards (R100 and R200 chips back around the turn of the century) and now are back to that.

Radeon 8500 All-In-Wonder! 7500 and 8500s were getting their poo poo kicked in by GeForces until R300 (Radeon 9700) launched with Doom 3, and you'd get insane performance improvements over the GeForce 4 with AA and AF maxed.

And it was codenamed Khan :black101:

movax
Aug 30, 2008

Agreed posted:

If Intel gets their driver poo poo together, they stand a chance at owning way more than any one company should. Too much integration. Not... sure if they'll be allowed to keep it at that point, the bread and butter of the graphics companies are that performance range, much as enthusiasts would like to feel included. Do I want a laptop with great battery life and extremely good graphics performance compared to current options? Absolutely! Do I want Intel to own the world? Don't feel they've earned it. I don't think it's sound economics to allow monopoly on the grounds of contemporary success, regardless of how impressive it might look at the moment; the long run does not extrapolate cleanly from the short term, and it's bad decision-making to just trust a company to keep besting themselves when over time that self-interest folds over to a stronger motivation to be profitable (a motive that no longer requires extraordinary innovation, just moving things along now and again).

P.S. for FF, I'll get in touch with you soon, man, just... a lot of poo poo going on. Thanks for reaching out, sorry this is my first word back.

Hey, you're alive!


eames posted:

And then next up there’s Broadwell which will apparently bring another 40% IGP improvement on top of Haswell.

Suddenly articles like this don’t look so stupid anymore.

Why is Intel suddenly so successful in the IGP department after trying (and failing) so hard for so many years? :confused:

FWIW I just got an ~80-slide Intel roadmap for 2013/2014, and I'd say 30 of the slides were only about graphics stuff. They ain't loving around, which is nice because dGPUs in notebooks slaughter battery life like none other. With no dGPU, that should let notebook vendors use simpler stuff like eDP as well, which can't coexist (at least as of a few gens ago) with using the PEG port of the CPU.

movax
Aug 30, 2008

roadhead posted:

AMD falling all over themselves to do it cheaper - and having APUs available for prototype/dev kits immediately? Just wild speculation on my part.

I guess AMD would be a one stop shop for your CPU & GPU needs as well.

movax
Aug 30, 2008

chippy posted:

This isn't an upgrade question, but it is a part-picking question. I thought it might fit better in here than in the system building/part picking thread, but feel free to tell me to gently caress off there if it's not.

Basically, I just resurrected my old gaming rig to turn it into a media/semi-casual gaming box for the living room TV. It's got an ageing 8800 GTX in it that has seen better days, I think it's dying of heat death, and when playing games on it, the picture cuts out every now and then (as in, the TV goes into a no signal state for a few seconds). This is the fault that prompted me to replace the machine in the first place.

Anyway, tl;dr, I just want a drop-in replacement for the 8800GTX from the last few generations that'll hopefully perform similarly, so the machine can still be gamed on, but not cost a huge amount. I don't mind going second-hand or a bit older. The TV it's plugged into is only 1280x720 so it doesn't need a huge amount of horsepower, although I may pick up a 1080p telly at some point. Something a bit quieter than the 8800 would be nice too.

Hmm, I've just realised I could just pick up an 8800GTX on ebay for about 30 quid. Still, it's noisy as gently caress, so I'd probably prefer something slightly more modern. Any suggestions?

What's your budget like? You could probably get a used Fermi card (GTX4xx/5xx) pretty cheap from someone who's upgraded, and if it's a halfway-decent OEM design, the cooler should be pretty drat quiet. My eVGA GTX460 was usually inaudible even under heavy load.

movax
Aug 30, 2008

Reminder to check NV's website for beta drivers that'll probably boost Bioshock performance quite a bit; if they're not out yet, I'd imagine they're due very early next week.

movax
Aug 30, 2008

Factory Factory posted:

I'm thinking we might need an FCAT and frame latency writeup for the OP. Y/N?

I think that's a good idea, I was thinking about it the other day and a bunch of the stuff is out-of-date as well. :(

movax
Aug 30, 2008

KillHour posted:

Damnit, Epic, why don't you ever release your tech demos to the public?

Rhetorical question I'm sure, but I'm fairly certain there's a very specific combination soup of drivers, hardware, etc. in use for the demos :)

movax
Aug 30, 2008

beejay posted:

Has anyone else been having trouble with nvidia 314.22 drivers? I get "freezes" for 1-5 seconds every game when playing Starcraft 2 and driver crashes at least once a day. I only play Starcraft 2 right now and last night it caused me to drop out of a game once and then froze my game badly enough that when it recovered all my stuff was dead. Not fun. Anyway I went looking through event viewer and this started on 3/16 which was when I installed 314.21 beta.

I recently saw a test on Tom's Hardware where they were using 314.21 and then rolled back to 314.07 drivers instead because they were having problems with weird "pauses" in the game.

Last night I rolled back to 314.07 myself and played a few more games and had no problems, but it will take more to be conclusive. I only play a couple hours a day so it's not like I'm asking a lot! I hope they come out with some new ones soon that solve this. I don't like being behind on drivers. Could it be anything other than the drivers? Edit: This is Windows 8 64-bit with a Geforce 660 card. I freshly installed Windows 8 on a new computer build in early March and had no issues at all until 314.21 on 3/16.

Yeah, I feel like my secondary monitor performance (two off a GT210) has gone waaaaay downhill after I installed 314.22 for Bioshock Infinite. Hopefully the next release fixes whatever regressions appeared.

Endymion FRS MK1 posted:

Here's an article showing the top 10 most important graphics cards.

Some... odd choices. 8800GTX instead of the GT? 5970 instead of 4870?

And of course the final one, the GTX Titan

Kind of a lovely list; starting off early is fine, but it skips a lot of notable cards like the 9700 that were huge steps up / disruptive forces in the market. But it's PC World, so :v:

I think I did a write-up on the Riva 128 or TNT2 earlier in the thread; I was hoping to do a series of "retro" posts, but man, writing words takes time.

movax
Aug 30, 2008

A 2GB 660 Ti should be able to handle 1080p with settings maxed or close to max (excepting some insane anti-aliasing/other special effects), so if you're not seeing the performance you expect, it's possible your G840 is bottlenecking; but that's a Sandy Bridge-based Pentium, so it's pretty much brand new when it comes to the current crop of games. I wouldn't blame the 660, though; do you have a ton of background tasks running or something?

movax
Aug 30, 2008

I don't think we should necessarily be recommending away from AMD cards (this reminds me that the OP recommendations are probably in need of updating), and they certainly deliver better value at certain price points (especially when there are 2-3 AAA games thrown in). I think single-GPU performance at, say, 1080p is still solid for most gamers, assuming drivers are stable/ready at that point.

movax
Aug 30, 2008

Agreed posted:

There is a very good chance I will phone-Skype you from the hospital high as a kite. Shouldn't be a long stay at all (and thank god, medical bills are insane, insurance covers far too little... bleh, there's a reason I want to upgrade to a new card out of principle but I can't, hah) but it is inpatient. They pump the good poo poo while you're inpatient. So, uh... prepare for that, amigo. Haha.

Also, I missed this; well wishes, Agreed. Please tape your high-as-a-kite rantings about GPU overclocking to share with us afterwards :allears:

movax
Aug 30, 2008

Just updated to 320.18... to be fair, the update took unusually long compared to any previous driver install, and my cards beeped several times as well. No other signs of damage or malfunction, though.

movax
Aug 30, 2008

Nice performance improvement on the 770, but my 670 is less than a year old :negative:

I think I will probably hang on to it until Maxwell and eke out all the performance I can from OCing. 2GB of VRAM is definitely on the border for 1600p :(

movax
Aug 30, 2008

Agreed posted:

Awesome!

Do you guys think I should formalize a Boost 2.0 overclocking guide? Right now all my thoughts and info are kind of spread around among a bunch of posts, and not highly organized.

Heck yeah; I apparently missed your guide a few pages(?) ago. PM me links to those and I'll totally throw them in the OP.

movax
Aug 30, 2008

Agreed - I totally did get your PM, but got carried away fixing my DNS and hosting, which I discovered were broken when the OP turned up without any images whatsoever :downs:

Updated the OP and thread title; sorry for the delay :angel:

movax fucked around with this message at 01:44 on Jul 19, 2013

movax
Aug 30, 2008

loving nvidia... trying to install the R331 drivers rendered my system completely unusable... can't uninstall drivers, can't reinstall drivers, can't boot normally; the only boot mode that works is 640x480.

I hate computers.

movax
Aug 30, 2008

necrobobsledder posted:

For everyone else that had a problem after installing the updated nVidia drivers: I was able to get out of being forced into safe mode to do anything by completely uninstalling all drivers and all nVidia and ATI software (including ATI - I had both ATI and nVidia drivers present briefly) and cold booting between each configuration until I finally had drivers installed. Granted, this may not be the proper solution, but I tried reinstalling drivers like everyone else and that didn't work whatsoever. I suspect something is wrong with a number of users' configurations that nVidia didn't quite test for. It kind of pisses me off because I'm not exactly a crazy power user when it comes to my GPUs and I've stuck with mainline driver releases, so I shouldn't have had any problem at all. Maybe it has something to do with using a GTX 680 that I'm not aware of, but seeing that people on even 460s are getting similar problems, I'm just going to leave the judgment at "nVidia QA hosed up"

Yeah this is pretty much how I fixed my problem...it just took like three loving hours. Everything seems normal now.

Nothing too crazy about my config either... one GTX 670, one GT 210, three monitors. Though recently I've been driven nuts by incredibly lovely 2D performance (especially if Flash/GIFs are on web pages) on the displays run by the GT 210; is it really that lovely of a card that it can't handle some 2D workload? I wonder if the x16 slot I shoved it in is secretly x1 electrical...

movax
Aug 30, 2008

Alereon posted:

It's hard to appreciate how truly lovely a Geforce 210 is: We're talking 20% the performance of Intel HD Graphics at best. You're definitely not going to get acceptable performance even for basic 2D. In particular, it has as little as one eighth the memory bandwidth of your system RAM.

Ugh...it's a passively cooled GT 210, MSI I think.

Then again, I bought it when I had a GTX 460. I should see if I have the cables to make a 3-monitor setup happen on my 670; I only plan on gaming on one of them. Any horrible downsides?

movax
Aug 30, 2008

Factory Factory posted:

Higher idle clocks if the screen resolutions are mismatched. Power equivalent to, eh, running two or three CFL bulbs.

:sigh:

If only I had a Z68 mobo, this wouldn't even be an issue. Hooked up one more display to the 670; gonna see how it works out now and whether it murders my FPS by any appreciable amount.

e: loving Nvidia, how do you sell a loving GPU that spikes to 33% usage displaying a browser window where the only moving element is my avatar

jesus

movax
Aug 30, 2008

Also, yes, the OP is out of date. Factory Factory and/or Agreed (and all the other regulars), do you guys want to kick off a new one?

movax
Aug 30, 2008

Bloody Hedgehog posted:

Because the drivers recognize that a browser is open and therefore ramp up the clock speeds to account for the fact that browsers all use GPU-accelerated components now, no matter if the webpage being displayed is a text file or a video-heavy multimedia site. I haven't used AMD's stuff for a while, but I'd imagine their cards/drivers do the exact same thing.

Interestingly, I was having some trouble with Chrome playing back Flash, so I switched over to IE... it barely pegs either GPU. I guess I'll leave the GT 210 hooked up running my one email/IRC-only display.

Factory Factory posted:

Good lord, I'm still plodding away at the overclocking thread rewrite. I'd be happy to work on tasks you want to put in front of me, but I can't take on a full rewrite.

I think a lot of the content is still good; we just need to keep the first post w/ recommendations up to date. If you can let me know what to throw in there (I've been under a rock when it comes to what AMD's doing), I can definitely do that.

movax
Aug 30, 2008

Agreed posted:

No. Here, let me try again, with a little more brevity.

This thread has been around for a while, and what started as a sort of inside joke of "do what I say, not as I do" because I tend to prefer higher performance and am willing to pay a price premium for it, in the context of a forum where we almost always encourage people to value price:performance most heavily, has turned into more than it was ever intended to be. This is especially problematic for me because newer folks who come here don't understand that I actually do know what I'm talking about, and do my best to help others with analysis, etc., in addition to participating in the more interesting discussions, which are unfortunately becoming less frequent - the level of discussion surrounding AMD's launch conference and nVidia's Way It's Meant To Be Played conference never really got particularly exciting, and we didn't talk about the AMD developer conference last month at all. Some of the SH/SC industry insiders and experts just don't really come around anymore for one reason or another, and it's a shame.

Even so - there are still good things to do with the thread and discussion. I want to continue to be a part of that. But the downside of less esoteric and more accessible discussion is that many of the new people coming in could very easily see a bunch of jokes and get the wrong idea about who I am. I don't like being portrayed as just foolish, even if it can be said to just be a joke. The thread has been more about recommendations for cards or post-your-recent-buys lately, and that's brought in a lot more traffic than it used to get and plenty of new faces, who have no real context for "oh, that's a joke" and might think I actually do just do all sorts of dumb poo poo for no good reason.

If you didn't know me and know that I pay really close attention to keeping tabs on new developments in graphics cards and rendering tech, or that (if I can talk FactoryFactory into it, as he's quite busy!) we'll be taking over the OP from Movax by his request in the near future, you could easily come away with the impression that I'm just another [H] idiot who you probably just shouldn't listen to.

I would prefer for that not to be the case. That's all.

Sorry to bring this back up. I've been super busy lately with a new job (and I don't do much directly with GPUs anymore), but I will try to be around more often. Agreed/FF, you can definitely take over the OP whenever you want; it's woefully out-of-date.

movax
Aug 30, 2008

Gwaihir posted:

We have a post the pictures of your ~real~ desktop thread, what kind of hardware forum would we be without a "Post the pictures of your desktop(pc this time)" thread?

veedubfreak posted:

So UPS was kind enough to leave my 800 dollar package from Mountain Mods at my door last night without even having the courtesy to ring the doorbell. God I loving hate UPS. Where should I post a build thread?

Go for it, and if it turns into a twisted parody of [H] build threads, I'm OK with that too. We could use more threads than just the megathreads floating around. :science:

movax
Aug 30, 2008

I missed the Ambilight clone discussion; don't all commercial solutions use FPGAs? Nothing else will be OS/driver/PC-independent, I assume. I take it the real TVs that can do it take obvious advantage of the fact that they have access to the raw display data internally.

Wonder if it's worth Kickstarting...


movax
Aug 30, 2008

Factory Factory posted:

Probably not, since Philips has US patents.


The block/pump assembly isn't the problem; it's the clearance of the decorative part of the mounting bracket and the extra fan you install to cool VRMs and whatnot that the CPU block doesn't cover.

Oh, well that sucks. Maybe do it in a way where it just happens to be a device that sits in-line and outputs color data; what you do with it is up to you. Could be a fun project!
