Phuzun
Jul 4, 2007

The GTX 770 has a max temp of 98C, so 81C is not too bad under heavy load for an extended time. An aftermarket cooler might be an improvement, but it doesn't seem like you need an immediate solution. If you want to watercool, just spend some time researching and planning a layout.

Phuzun
Jul 4, 2007

cat doter posted:

Doesn't seem to be, I ran OCCT again with GPU-Z and under 51% load for 3 minutes it reached 91C.

I don't think the temperature is the cause, since it does it at idle or while watching videos. It might be memory related, since I've seen it happen more often under load in games, but using this program hasn't caused the problem.

There's a memory option in OCCT, can I just make it use more memory or something? Would that help diagnose if it's the card's memory?

The only memory options I see in OCCT are for the CPU.

For the GPU, there are MemtestCL and MemtestG80. But I don't know how well these work, since I can put a several hundred MHz increase on my memory and not see an error, yet games will absolutely poo poo themselves the second they try to render at the same memory speed.
https://simtk.org/home/memtest/
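
If you just want a rough sanity check of VRAM, the core idea behind those testers is writing a pattern and reading it back. Here's a toy sketch of that idea in Python with pyopencl (assuming pyopencl is installed and picks up your card); it is not MemtestCL itself and won't catch nearly as much, since the real tools run Memtest86-style patterns on the GPU itself:

```python
# Toy VRAM sanity check: write a fixed pattern to a GPU buffer, read it back,
# and count mismatches. Purely illustrative -- MemtestCL/MemtestG80 run far
# more thorough Memtest86-style patterns on the GPU itself.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

size_mb = 256                                   # how much VRAM to exercise
words = size_mb * 1024 * 1024 // 4
pattern = np.full(words, 0xA5A5A5A5, dtype=np.uint32)
readback = np.empty_like(pattern)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=pattern.nbytes)

errors = 0
for _ in range(10):                             # a few passes
    cl.enqueue_copy(queue, buf, pattern)        # host -> VRAM
    cl.enqueue_copy(queue, readback, buf)       # VRAM -> host
    queue.finish()
    errors += int(np.count_nonzero(readback != pattern))

print(f"{errors} mismatched words after 10 passes")
```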

As far as the issue you are having, that sounds exactly like the video driver stopping. Try different drivers with clean installs. Other things to try: re-seat the card in the motherboard and swap in another power supply (preferably a larger one).

Phuzun
Jul 4, 2007

Blur Busters has a good demonstration to give you an idea of how G-Sync will look.

Just ordered up the MSI GTX 780 Twin Frozr that is on sale on Newegg and an EK waterblock for it. It comes with a mild OC already, so I'll see how much further I can push it. I'll be moving my current GTX 570 down a few slots and just letting it fold non-stop.

Also, nice write-up on the differences in the products, Agreed. Seems like everyone only focuses on the fps/benchmarks and doesn't consider the rest of the card's functions. Shadowplay and the built-in encoder are what kept me with Nvidia this upgrade.

Phuzun fucked around with this message at 19:20 on Dec 17, 2013

Phuzun
Jul 4, 2007

Got my MSI GTX 780 installed last Friday. Stock is 902MHz (1050 boost)/3004MHz, and with the factory air cooling I got it up to 1176MHz (+126)/3499MHz with a 25mV voltage bump. After a good heat soak from folding@home, it was getting up to 76C with the stock fan config. The EK full-coverage block just showed up in the mail and will be going on this week.

Also picked up some new fans as the ones I had been using were developing a really annoying bearing noise. These fans are pretty nice, low air flow for my quiet power radiators and don't seem to mind being horizontal or vertical for mounting.

Phuzun
Jul 4, 2007

Well, I had both my GTX 570 and 780 installed with the idea that I'd keep the 570 folding 24/7 or have it for PhysX games, but I ran into several issues and pulled it back out of this machine. First, folding@home does not use the normal index order for the GPUs, but it does for the CUDA/OpenCL indexes. This results in all kinds of fuckery with the config.xml to find which settings will get them both to fold, but then I ran into issues with the GPUs folding and the work units just disappearing when pausing them. Second, the waterblocks don't line up (pretty much expected this), so the water was looping through the 780 before hitting the restrictive 570 block instead of having the flow split between them. This added about 15-20C onto my CPU and 780 temperatures.
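
For anyone hitting the same thing: the slot indexes live in FAHClient's config.xml as per-slot options. Here's a rough sketch of patching them with Python; gpu-index/cuda-index/opencl-index are FAHClient v7 option names, but the index values below are made up for illustration, so pull your actual orderings from the client log.

```python
# Sketch: rewrite FAHClient's config.xml so each GPU slot pins its own
# gpu/cuda/opencl indexes. The index values here are placeholders -- get the
# real ones from the FAHClient log, since the driver and CUDA/OpenCL orders
# can differ (which is the whole problem).
import xml.etree.ElementTree as ET

tree = ET.parse("config.xml")
config = tree.getroot()

# Drop any existing GPU slots so we don't end up with duplicates.
for slot in list(config.findall("slot")):
    if slot.get("type") == "GPU":
        config.remove(slot)

def add_gpu_slot(slot_id, gpu, cuda, opencl):
    slot = ET.SubElement(config, "slot", {"id": str(slot_id), "type": "GPU"})
    ET.SubElement(slot, "gpu-index", {"v": str(gpu)})
    ET.SubElement(slot, "cuda-index", {"v": str(cuda)})
    ET.SubElement(slot, "opencl-index", {"v": str(opencl)})

add_gpu_slot(1, gpu=0, cuda=1, opencl=1)   # e.g. the 780
add_gpu_slot(2, gpu=1, cuda=0, opencl=0)   # e.g. the 570

tree.write("config.xml")
```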

While there was some performance improvement from the dedicated PhysX card, it was insignificant compared to the raw power of the 780. Batman: Arkham Origins already runs maxed out in the 60-90fps range, and with the dedicated PhysX card it gained around 10fps. I'm going to throw the 570 into another machine at this point and just work on overclocking/BIOS tweaking the 780.

e: on the topic of Shadowplay, has anyone encountered an issue where the audio is not captured, or, if it is captured, it is only the rear/surround audio? Figured this out: the Asus Xonar DSX has an option called GX, and this is what was causing audio to go missing in Shadowplay.

Phuzun fucked around with this message at 17:09 on Dec 30, 2013

Phuzun
Jul 4, 2007

sedaps posted:

I just purchased 2 X-Star DP2710 monitors hoping for a dual setup with both running 1440. Unfortunately I only have one DVI port on my Radeon 7870. I was about to just get a Mini-DP to DVI adapter that's about $100, but then I thought what if I get a new card?

So I'm seeing all the GTX 660/670/680 cards have dual DVI outputs. Is it worth it for me to upgrade from the Radeon 7870? Or should I buy an adapter for $100 now and then wait for more significant bang for my buck a year down the road?

Would these cables not work for your card/display? They are much cheaper than $100.
http://www.monoprice.com/Category?c_id=102&cp_id=10246&cs_id=1024604

Phuzun
Jul 4, 2007

sedaps posted:

I think those are single link DVI. This is the monoprice version of what I need, although the reviews are not so great on them from some research.

Yeah, looking at them now, they do appear to be single link. You had mentioned the GTX 600 series, but AMD also has cards that feature 2 dual-link DVI outputs, including other 7870s. Instead of buying an adapter, you could get a second 7870 for the extra performance and even have a third DVI output for later.

Phuzun
Jul 4, 2007

With the way releases tend to be staggered between the two brands, there is simply always something around the corner. Upgrade when you have the money and the desire for a bit more performance. The 760 recently got a price drop, thanks to the ATI release. Don't expect an 860 to be as cheap as the 760s are when it comes out, should that be March, which could easily turn into May.

Phuzun
Jul 4, 2007

craig588 posted:

You can install a second card and flash the original BIOS you backed up if you messed up any of your edits. The 780 Ti still uses basically the same BIOS as the Titan, which is still incredibly similar as far back as the 680. You really should edit your BIOS yourself instead of flashing someone else's. Here's the Google Translate version of the tool that makes Kepler BIOS editing really easy. I don't remember any of the offsets to do it manually anymore since that tool makes it so easy. I wouldn't recommend using the recently released software patches to force higher voltages through Afterburner, because you're fighting against the power management implementation and are probably going to waste a whole lot of heat. A potentiometer hard mod would be a better idea if you really wanted more than 1.21V. That way you're just giving the GPU a set offset voltage and the driver/software doesn't know anything about it.

I've actually been messing around in the other direction. I have my 680 down to using only 0.9V under low-load situations.

This tool doesn't seem to allow you to modify the voltage table. Everything else seems open, but that is pretty important if you want to overclock higher. There are BIOS files that have been modified with an increased voltage table.



Without going into further fuckery with voltages, I can hit 1320 core, 3456 memory on this board. I haven't tried it, but 1.4V is possible with BIOS and soft mods. It absolutely crushes things at 1920x1200, and that has really kept me from doing further tinkering.

Phuzun
Jul 4, 2007

craig588 posted:

You really don't want to do that with softmods. If you want more voltage, you really want to do a mod with a potentiometer that the software and driver don't see. The important thing to edit in the BIOS is the power limits. Those premodded BIOSes have boost disabled, which was one of the best power-saving features the Kepler generation started.

I guess that isn't my concern. I always keep it folding or gaming when running, which just happens to be always.

e: always

Phuzun
Jul 4, 2007

The R9 270X is a rebranded 7870, if I remember correctly. I think these are still desired for coin mining, which could explain the lack of supply.

Phuzun
Jul 4, 2007

ItBurns posted:

I'm pretty sure I've seen other MSI cards listed as being discontinued, so there might be something there. NCIX in general is slow and unresponsive compared to Amazon or Newegg, not unbearably so but they're not really 'on the ball'. The combination of the two is just asking for trouble. I had the same issue with the 280xes before the holiday, and have just decided to stick with my card until something better comes out or the ltc miners get their power bill and throw in the towel.

Edit: I think the 760 is probably the closest in-stock card to what you'd pay for/get out of a 270x.

Well he does want that BF4 pack-in. Newegg.ca has the XFX 270X with the game code and is the cheapest available. I certainly wouldn't wait on NCIX any further at this point.

Phuzun
Jul 4, 2007

Duro posted:

Well, since I already paid NCIX I figured I'd stick with them for now and just make them switch the card out

NCIX allegedly has the Asus, XFX, Sapphire and Powercolor in stock

Which is better? I really don't follow hardware manufacturers that much (I've been out of the game for a while) so I don't know which is quality and which to avoid. I think I would have been happy with the Hawk cause it looked well built, but what can we do.

Someone told me about these miners buying this card, it's crazy that it might be a reason why these cards are mostly out of stock

I've used XFX several times back when they made Nvidia products. They didn't let me down, and the cards still work when swapped into the water loop.

Phuzun
Jul 4, 2007

Don Lapre posted:

I'd just spend the money on a better Quadro card and game on it.

Unless I am misunderstanding the naming format for Quadro cards, you go from $1,799 for a 1536-core K5000 to $4,999 for the 2880-core K6000. You could easily afford to get the K5000 for the specialized tasks and a GTX 780 Ti for gaming, instead of getting an equivalent Quadro to game on.
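
Back-of-the-envelope on those numbers (the 780 Ti price and core count here are from memory, roughly $699 and 2880 cores, so treat them as approximate):

```python
# Dollars per CUDA core using the list prices quoted above. The 780 Ti line
# uses an approximate launch price and core count from memory.
cards = {
    "Quadro K5000": (1799, 1536),
    "Quadro K6000": (4999, 2880),
    "GTX 780 Ti":   (699, 2880),
}

for name, (price_usd, cuda_cores) in cards.items():
    print(f"{name}: ${price_usd / cuda_cores:.2f} per CUDA core")
```

That works out to roughly $1.17 and $1.74 per core on the Quadros versus about $0.24 on the 780 Ti, which is the whole argument in one number.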

Phuzun
Jul 4, 2007

Gyrotica posted:

I have a question about the GTX Titan. Are there any neat/fun/useful things one can do with a Titan that makes use of its compute capacity? My rig is primarily built for gaming and I'm not a molecular biologist, but I can't shake the feeling that I should find other uses for it since I have the capability.

I briefly dallied with cryptocurrency mining, but apparently CUDA just doesn't cut it.

Folding@home or the various grid projects, if you'd like to contribute to science without expecting a personal return.

Phuzun
Jul 4, 2007

Adding a second NVIDIA card for PhysX processing is as simple as installing it in an available slot, then reinstalling the drivers. The default is for the driver to choose the best card, but you can force a specific one in the NVIDIA control panel.

Phuzun
Jul 4, 2007

exquisite tea posted:

Is it normal for the Geforce Experience optimization tool to randomly switch around its "optimal" configuration without any changes to the system? Last week it was telling me to play AC4 with high shadows, no SSAO and FXAA, now it's saying medium shadows with HBAO 2x and MSAA, neither of which seem to play any better or worse than the settings I was using initially.

Maybe check the setting that auto-updates the game configurations. I've never used that myself, but that sounds like what it's supposed to do.

Phuzun
Jul 4, 2007

Agreed posted:

It's a totally different matter talking about stability under GPGPU conditions where math errors manifest more readily - a Memtest86-like program for GDDR5 would be pretty damned revealing even when pointed at overclocks that we as gamers probably consider stable, throwing around bit errors left and right.

There is MemtestCL, which works on GPU memory.
https://simtk.org/home/memtest/

Phuzun
Jul 4, 2007

The waterblock they are using on the Hydro Copper version is garbage as well: no VRM cooling at all, if that was even a remote consideration.

Phuzun
Jul 4, 2007

Until every single chip that is made works at its best potential speed, overclocking will exist. Overclocking gets distorted by the internet and the instant gratification of community benchmarks. It isn't that exciting, because the only people who have to constantly make changes to their machines are the ones who don't know what they are doing or only care about pushing it as far as they can for some benchmark. Set it up and test over the first few weeks, then let it go. A good overclock lasts for years. Don't mistake careful marketing and binning for actual processor capabilities; every chip is unique.

Phuzun
Jul 4, 2007

Nephilm posted:

The discussion here is an effort/benefit balance. As the efficiency of "auto overclock" on CPUs/GPUs improves and thermal constraints get narrower, the cost of entry gets higher for the end user while the potential gains get lower. Pretty soon it's just going to be "get good cooling and let Boost do its thing" (and the definition of good cooling is also changing as parts need less heat dissipation), with little sense in getting costlier unlocked components or BIOS tweaking except for, as you mentioned, push-it-to-the-limit enthusiasts.

No, this is just the mainstream being confused about power saving circuitry. Running something below a known max speed, then dynamically increasing to that under load, is not overclocking.

Phuzun
Jul 4, 2007

Nephilm posted:

...correct me if I'm completely wrong, but for example Nvidia's GPU Boost, it isn't a "throttling down from a known max speed", but boosting clocks up until hitting temperature or TDP constraints - the former being a variable factor based upon cooling solutions implemented (which is what I mentioned), and the latter a manufacturer-specified limit that, as has been brought up by other people, has an ever narrowing fringe between the spec and how much a user can push beyond it without causing actual hardware damage.

Boost clocks are set by the manufacturer and are binned specs, not individually tested and set. Overclocking is pushing the chip further than the manufacturer's specifications, and each chip is unique in its capabilities.

Not trying to handwave anything. The technology in running these cards efficiently is great. But overclocking has a definition and it isn't the same as boost.

Phuzun fucked around with this message at 20:20 on Feb 21, 2014

Phuzun
Jul 4, 2007

Nephilm posted:

Did I forget to put the quotes on my post?


Oh, no I didn't, it's all good.

Every single point brought up by everyone but you has been correct. Stop being pointlessly anal about semantics.

Oh poo poo, I missed the quotes. I didn't realize I was arguing against a generalization, guess I'm just too loving anal. Thanks.

Phuzun
Jul 4, 2007

This is going to be a nice upgrade from my current GTX 780. Just gotta wait for the 3rd-party cards and water blocks so I can keep using my custom loop.

Phuzun
Jul 4, 2007

This Founders Edition drama has been so great. They have been doing the same thing for a while now, where they send out reference cards to 3rd parties for the initial release, and then the partners are supposed to push out custom stuff once they are supplied with only the GPU chip after the initial release. The additional $100 is new, though it is hard to blame a corporation for taking advantage of demand, since they are heartless entities run on profits. They should release a no-heatsink model for an extra $200 just for the press and pubbie drama.

Was honestly hoping they'd give us a Titan card with HBM2 right away, though I guess I should be relieved that my wallet isn't taking that hit. I have no intention of waiting for a Ti model, since I got my 780 when the Ti came out, and I kick myself for not making the jump a year earlier given the price drop wasn't huge.

Phuzun
Jul 4, 2007

They can't make their own? I'll wait and see, though it is usually more expensive with the pre-installed block, and then you have no air cooler for selling the card used or putting it in another computer later.

Phuzun
Jul 4, 2007

The EVGA GTX 1080 SC was up for $649; I snagged one, and when I refreshed a few minutes later it was already sold out again. Pure luck that I happened to check. Should be a notable upgrade from a 780.

Phuzun
Jul 4, 2007

My last 3 have been Zotac with no issues. I do like their warranty since I plan to put a water block on this one, though I've never had to warranty a video card, and the last 3 had blocks as well. The EVGA was actually in stock and not a Founders Edition.

Phuzun
Jul 4, 2007

Got my EK waterblock installed and let it run through the 3GB memory burner in ScannerX after some quick (no voltage) overclocking. After 45 minutes it was at 2100MHz (max 2151), memory held stable at 5500MHz, and the GPU hit a maximum of 57C (I also had IntelBurnTest running on the 2600K at 4.5GHz). This is a +150 offset on core and +550 on memory for an EVGA GTX 1080 SC. I'm pretty happy with this after running a GTX 780 since the price drop/release of the 780 Ti.

Phuzun
Jul 4, 2007

THE DOG HOUSE posted:

I hadn't heard this before, so that's something, I guess. I believe the whole "FE" thing has bitten them way too hard in the rear end to go exactly that route again, but for what it's worth, I remember the older Titan cooler cost $80 to make, which was like a margin rear end wrecker. But that was soooooo long ago I don't remember where I read it.

The manufacturing is cheap; the R&D going into the cards is expensive, which is why the costs are so high.

FE wouldn't be so bad if we saw fewer of them from the aftermarket companies now that they've been shipping for a few weeks.

Phuzun
Jul 4, 2007

spasticColon posted:

But I don't have a G-sync or a Freesync monitor. I do plan on getting a new monitor soon but it's still only going to be 1080p like my current monitor but with the refresh rate bumped up to 120Hz. But will a 1060 or RX480 cut the mustard for 1080p 120Hz gaming?

There are some newer games that aren't able to maintain 120fps on the GTX 1080 at 1080p (maxed). Either should be fine if you are okay with lowering details.

Phuzun
Jul 4, 2007

THE DOG HOUSE posted:

:wtc: look at that loving thing

Sad (or ironic) that it doesn't overclock any better than other 1080s. Certainly not worth the cost unless it is going in a system that the owner won't manually OC and price isn't an issue.

Phuzun
Jul 4, 2007

Yes, it needs to be put in the game code. Nothing is supported yet.

Phuzun
Jul 4, 2007

Deuce posted:

They just need to add pre-filled extra radiators with QDCs and there's a golden age of custom water loops available for even novice builders.

I might go self-built anyway for cosmetics plus the fun and learning experience, though.

I've had a custom loop that has been upgraded and changed over the past decade. I'm not even going to argue the cost, cause air has it beat without a doubt. If you are willing to spend the time to plan out your build and not rush it, I doubt you'll have major issues. Do a test fit and cut the tubes, remove everything to fill/bleed/leak check, then install for use.

Right now I run 6 Enermax 14dB fans and a quiet 200mm side panel fan, with 240mm and 360mm radiators and quick disconnects between each component in the loop. It is very quiet at this point, and doing a single component change is painless. The days prior to quick disconnects meant pulling the entire setup out to avoid leaking on your system, so I'd recommend them if they're in budget.

Bare minimum would be a block (or multiple), pump, radiator, reservoir, something to kill algae like a silver plug or coil, tubing, enough fittings for everything, and fans. Be mindful of the tubing size you're buying and match it on all the parts that need it.

Sorry I don't know of any guide links, don't really need em.

Phuzun
Jul 4, 2007

This generation of Nvidia doesn't really seem to benefit from the fancier editions. Best to look at the cooler and how loud it is, as they all seem to land within 100MHz of each other. Plus, with the way boosting works now, it is hard to pinpoint the actual maximum.
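
If you want to see where boost actually settles on a given card, a quick sketch like this (assuming a single GPU and that nvidia-smi is on the PATH) will log the SM clock while a game or benchmark runs:

```python
# Poll nvidia-smi for a while and report where the boost clock actually sits.
# Assumes a single GPU; the query field names are standard nvidia-smi ones,
# and the 5-second / ~5-minute numbers are arbitrary.
import subprocess
import time

samples = []
for _ in range(60):                                   # ~5 minutes total
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=clocks.sm,temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    ).strip()
    clk_mhz, temp_c = (int(x) for x in out.split(", "))
    samples.append((clk_mhz, temp_c))
    time.sleep(5)

clocks = [c for c, _ in samples]
print(f"SM clock: max {max(clocks)} MHz, min {min(clocks)} MHz, "
      f"avg {sum(clocks) / len(clocks):.0f} MHz")
```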

Phuzun
Jul 4, 2007

twxabfn posted:

My new TV does 1080p@120Hz and I've seen a couple occasions where my 1070 will drop into the 90s, but in that case it's not actually noticeable during gameplay - I can only tell if I actually have the frame counter up on the screen. I tried DOOM at 4k for a few minutes (sitting 10.5' from a 70" TV) and didn't notice any difference other than 1/3 the FPS. I might try 4k in games with 60FPS locks (EDF 4.1, if I ever play FO4 again), but if I have to turn down 4k settings to hit 60 I'll probably just stick with 1080p.

Seeing the additional 4K detail becomes harder with distance, very quickly. 120Hz is noticeable at any distance, which is exactly why I prefer that in my TV as well.

Phuzun
Jul 4, 2007

As much as you can afford, then use DSR to make it look really good. Without getting into that, yeah, the 1070 is likely good enough. My 1080 is on a TV that takes 120Hz at 1080p, and it plays most games at 120fps with max detail. I imagine a 1070 will maintain 60 in everything. Plus it saves some money for the games that you'll likely want to buy.
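
For a rough sense of what DSR renders at from a 1080p panel: the factor multiplies the pixel count, so each axis scales by the square root of it. The factor list below is Nvidia's usual preset set, and the driver snaps the results to clean modes (e.g. 1.78x is 2560x1440, 4x is 3840x2160), so treat the printed values as approximate:

```python
# DSR factors multiply the pixel count, so each axis scales by sqrt(factor).
# The factors are Nvidia's usual DSR presets; the driver rounds the results
# to clean modes, so the printed values are approximate.
from math import sqrt

native_w, native_h = 1920, 1080
for factor in (1.20, 1.50, 1.78, 2.00, 2.25, 3.00, 4.00):
    w = round(native_w * sqrt(factor))
    h = round(native_h * sqrt(factor))
    print(f"DSR {factor:.2f}x -> ~{w}x{h}, downsampled to {native_w}x{native_h}")
```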

Phuzun
Jul 4, 2007

SinineSiil posted:

Nvidia GFE driver installer chooses express install even though I clicked custom. I've noticed it once before but then I thought I just misclicked.

Really want a version that just has Shadowplay, no game optimizing or driver updates. I always try to hang back on drivers, to make sure it doesn't break folding, and GFE updates always seem to introduce problems vs manual downloads.

Phuzun
Jul 4, 2007

kuroiXiru posted:

Well, you can just use GFE, turn off auto updates, and ignore optimizations.

Lots of unnecessary stuff still running and causing issues. The latest one was the Xbox One wireless adapter breaking due to one of the GFE services.

Phuzun
Jul 4, 2007

The Science Goy posted:

Driver chat:
Since a ton of people are upgrading their hardware due to 1080Ti release and mid-tier price drops, what is the generally accepted best practice for updating drivers?




...just adding to the thread discussion with a relevant question... it's totally unrelated to the fact that I have a 1080Ti getting delivered Monday and haven't done this for a while...


On that note, I don't play mainstream AAA games (just racing sims like iRacing/Automobilista/rFactor 2), and I haven't updated my 970's drivers for months because it sucks to lose a bunch of iRating to hardware issues. I'm running some benchmarks with my current setup to quantify my $700 purchase, and I'm curious what kind of framerate differences I'll see with the latest 970 drivers vs. my old ones. I know AAA titles can see gains from the driver optimization stuff, but these simulations have a much smaller player base, so they don't get the same treatment.

Usually you can get away with installing over the existing drivers and not using the clean install option (choose custom if you don't need/want the 3D Vision drivers). For new hardware, I'd uninstall everything from the Windows control panel and use the clean install option. Same thing if you end up with issues. If you don't run custom settings or resolutions through the Nvidia control panel, there isn't a huge loss in using clean install all the time.
