|
Workstation IGPs would be the equivalent of the Quadro and FirePro discrete GPU market, for folks doing CAD primarily. It's just that there's no such thing as a "workstation"-class IGP at this point.
|
# ¿ Jan 10, 2013 23:29 |
|
Nondescript Van posted:That is the exact opposite of a phantom.
|
# ¿ Jun 2, 2013 04:46 |
|
Another consideration is that more powerful PhysX hardware doesn't necessarily provide better visuals, since most games that support PhysX don't expose anywhere near the range of crazy options we get for primary graphics. So the optimal coprocessor is the one with the lowest possible idle power that's still fast enough to handle PhysX for the life of the system. Otherwise, unless power is really expensive, I'd suggest just keeping old cards around as PhysX coprocessors; selling them and then buying something cheaper every upgrade cycle just sounds silly.
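To put rough numbers on the "unless power is really expensive" part, here's a back-of-envelope sketch; every figure in it is an assumption, not a measurement:

```python
extra_idle_watts = 15       # assumed extra idle draw of the old card
hours_on_per_day = 8        # assumed daily on-time of the machine
usd_per_kwh = 0.12          # assumed electricity rate

kwh_per_year = extra_idle_watts * hours_on_per_day * 365 / 1000
print(f"~{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * usd_per_kwh:.2f}/yr")
# ~44 kWh/yr -> $5.26/yr, i.e. keeping the old card around is cheap at typical rates
```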
|
# ¿ Jun 15, 2013 22:13 |
|
It's a good offensive and defensive strategy: they can expand into potential markets without doing lame channel-sales agreements or signing more fab contracts, and if they're having problems getting good profits off of existing IP, licensing it can shore up margins. It'll probably be a year or two before we see anything materialize out of this, but I'd like to be wrong and find out there were already things in the works, and that this just made licensing official policy for everyone.
|
# ¿ Jun 19, 2013 14:52 |
|
Speaking of quiet, I'm on the last-gen train with a "new" GTX 680, and compared to my previous EVGA GTX 560, which I stupidly fried, this GPU is so much quieter at load it's not even funny. Noise was by far my biggest complaint and a substantial factor in why I hadn't considered high-end cards, but it makes sense: with performance headroom this big, a card that runs everything without breaking a sweat barely needs to spin its fans at all. The thing is, I can't really imagine how a GTX 780 could be even quieter, given that my CPU fan is probably the louder component now. Perhaps the limited airflow (and therefore cooling) in a supercompact mini-ITX case would be affected more than my micro-ATX Lian Li PC-A04.

The best part is that the power consumption is as good as or better than the GTX 560 I had, at far superior settings, so the only downside is the up-front cost. Another unexpected pro is the idle power consumption: it's within 3-4 W of an ATI 5450, which idled the whole system around 51 W on a 520 W PSU. A Kill-A-Watt reading of 54 W sitting at the desktop is not what I expected, given that most reviews had the idle power consumption of their systems at least in the triple digits.

Unfortunately, I don't think I can really get on the cooler = faster train for the current-gen cards, because I plan on building something vaguely close to a new Mac Pro: a vertical air-pull configuration in a smaller case, with limited airflow and a cooling design that won't work well with ACX coolers especially. I'm really curious whether the PC industry will take a cue from Apple and start thinking about integrating component coolers together, but I don't think that'll happen, for both cost and technical reasons. I'd love to be able to swap parts out and know I'm getting decent cooling in a tiny yet quiet case that takes up basically no room. I mean, that's the goddamn holy grail of desktop computing, isn't it? The smallest package that lets you upgrade stuff as needed while staying cool and quiet.

Does anyone have any theoretical numbers on what kind of heat dissipation a cooling design like the new Mac Pro's could achieve? If it can solidly cool a dual GTX 780 configuration, it'll give enthusiasts something new to blow their money on, so there would be some financial incentive to go this route, at least as a boutique industry.
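In the meantime, here's a rough sanity check using the standard electronics-cooling rule of thumb (CFM ≈ 1.76 × watts / ΔT in °C, from air's density and heat capacity); all the TDP figures below are assumptions on my part:

```python
# Rough airflow needed for a Mac Pro-style single-fan thermal core.
watts = 2 * 250 + 130      # dual GTX 780 at ~250 W each, plus a ~130 W CPU
delta_t_c = 20             # allowed rise of exhaust air over intake, in C
cfm = 1.76 * watts / delta_t_c
print(f"{watts} W needs roughly {cfm:.0f} CFM at a {delta_t_c} C rise")
# 630 W -> ~55 CFM, which one big slow fan can push; the hard part is a
# heatsink that actually transfers 630 W into that air stream quietly.
```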
|
# ¿ Jul 6, 2013 17:13 |
|
I think we can fall back on buying optimally and upgrading often to avoid getting bitten hard one way or the other: "(Computers) A strange game. The only winning move is not to play. How about a nice game of chess?" I'm convinced a lot of the "enthusiast" high-end products exist basically for every other nerd to blow money on once, realize they got nothing meaningful out of it, and move on afterward. I guess I sort of did the same thing, except I bought a PowerBook G4... four months before Apple introduced the x86 MacBooks that spanked it on performance.
|
# ¿ Jul 26, 2013 16:03 |
|
AMD has been known for shoddy developer tooling and APIs compared to nVidia, while being somewhat OK about IP sharing and licensing; the opposite seems true of nVidia (see: Linus Torvalds' rants on nVidia).
|
# ¿ Aug 13, 2013 15:53 |
|
Athropos posted:I'm beginning to think that maybe my SLI 2GB 680s are not enough for the resolution I'm using (2560x1440). I need to run some tests and monitor VRAM usage before being certain but god damnit.
|
# ¿ Sep 9, 2013 01:12 |
|
For everyone else having problems after running the updated nVidia drivers: I got out of being forced into safe mode by completely uninstalling all drivers and all nVidia and ATI software (including ATI; I briefly had both ATI and nVidia drivers present), cold booting between each configuration change, until I could finally install the drivers cleanly. Granted, this may not be the proper solution, but I tried reinstalling drivers like everyone else and that didn't work whatsoever. I suspect something is wrong with a number of users' configurations that nVidia didn't quite test for. It kind of pisses me off, because I'm not exactly a crazy power user when it comes to my GPUs and I've stuck with mainline driver releases, so I shouldn't have had any problem at all. Maybe it has something to do with using a GTX 680, but seeing that people even on 460s are getting similar problems, I'll just leave the judgment at "nVidia QA fucked up".
|
# ¿ Oct 23, 2013 15:39 |
|
I got a GTX 680 2GB card for like $250 from a goon when the GTX 7xx series came out and I saw how little value I'd get for an extra $100+. I had burned out my GTX 560 right when I was about to sell it and was without any gaming-capable card for months. Totally glad I waited. I've gotten smarter about buying hardware since I got burned buying a PowerBook G4 about four months before the first x86 MacBook showed up.
|
# ¿ Nov 5, 2013 18:25 |
|
Has anyone tried the Accelero Hybrid II? I'd like to quiet down my GTX 680 at both idle and load and hopefully be able to use it later for a GTX 970 or even beyond.
|
# ¿ Jan 27, 2015 00:46 |
|
This debacle doesn't seem that different from the TLB, TSX, FDIV, and various other historical fuck-ups by CPU makers, and there will likely be drivers that patch it out or something. We've always had to do crazy gymnastics to access memory at various points in computing history. Anyone remember emm386.exe? How about bank switching on the Atari 2600? Any game developer that feels like releasing a patch to avoid this area is free to do so, but this may be fixable on nVidia's side with drivers that exclude the region from allocation unless developers specifically enable access to it on a GTX 970. People threatening to sue over this kind of asinine crap only lets the lawyers win. Never give in to team law.
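As a purely conceptual illustration of what "exclude the region from allocation" could look like (this is a toy sketch, not how nVidia's driver actually works):

```python
# Toy model: fence off the GTX 970's slow upper 512 MB unless the app opts in.
FAST_BYTES  = 3584 * 1024**2   # full-speed 3.5 GiB partition
TOTAL_BYTES = 4096 * 1024**2   # includes the slow 512 MB segment

class VramPool:
    def __init__(self, allow_slow_segment=False):
        # A developer would pass allow_slow_segment=True to reclaim the region.
        self.limit = TOTAL_BYTES if allow_slow_segment else FAST_BYTES
        self.used = 0

    def alloc(self, nbytes):
        if self.used + nbytes > self.limit:
            raise MemoryError("allocation would spill into the excluded region")
        offset = self.used
        self.used += nbytes
        return offset   # stand-in for a real buffer handle

pool = VramPool()                  # default: slow segment fenced off
buf = pool.alloc(1024 * 1024**2)   # 1 GiB, lands in the fast partition
```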
|
# ¿ Jan 28, 2015 03:14 |
|
I wasn't planning on upgrading to a GTX 970 anytime soon, but I'm so tempted to exploit all the rich kids around here that have more money than sense. Sure, they'll still be richer than me, but I'll have a great card for maybe $40 more than the eBay price of my GTX 680, a card that's two generations old. It just means I'll hold the next upgrade for another three years instead of one.
|
# ¿ Jan 30, 2015 13:58 |
|
Megasabin posted:I actually don't know. I got the card recently, and I haven't been playing anything taxing on it. It was actually a christmas gift in anticipation for the Witcher III. If I have stuttering while trying to play the Witcher because of this issue I'm not going to be happy, so I'm trying to sort it out now.
|
# ¿ Feb 3, 2015 17:07 |
|
Darkpriest667 posted:Let's be honest to the folks here, the "telemetry" that they're going to be collecting isn't anything super personal or special. It's nothing secret that they wouldn't be able to get anyway. It's just more convenient.
|
# ¿ Feb 8, 2015 15:39 |
|
My GTX 680 has either magically developed coil whine after a year of whatever, or its fan has started to need lubrication, because even at idle speeds I hear a squeak or chirp every couple of seconds and it's pretty damn irritating. I tried blowing the dust out of the card and that hasn't changed the sound at all. Has anyone dealt with something like this, and would it be worth taking the fan assembly off the card to clean and re-lubricate the fan? I think it's a sleeve-bearing fan, so I'm not even sure how, or whether, I can lube those up.
|
# ¿ Feb 19, 2015 17:56 |
|
Truga posted:Yeah, I was gonna get 2 970s to run my 30", and now I don't think I will over this. 3.5gb might be just right for 1440p, but people with 2560x1600 say it can bog down in some games. I think I'll just wait for R300 and see what happens to 980 prices or if a r300 is a better deal.
|
# ¿ Feb 21, 2015 02:28 |
|
Zig-Zag posted:Newegg is giving away the witcher 3 when you buy a MSI gtx 970 100ME. I'm glad I waited because that's the game that I wanted a new gpu for.
|
# ¿ Mar 10, 2015 23:14 |
|
veedubfreak posted:The odds of actually hurting equipment in this day and age from static discharge is so small as to be almost negligible.
|
# ¿ Mar 12, 2015 20:28 |
|
I moved my GTX 680 from an old machine to a new one and now I get coil whine when I never had any before. It's definitely possible to acquire coil whine (or potentially remove it!) by changing PSUs, motherboards, etc., since that can change the harmonics in the electrical system that cause the resonance. Some people use VSync to avoid it, others might need to disable it (I run VSync to keep the fans from spinning up). I think I'll just live with it until Pascal GPUs are out; I'm too busy to bother playing anything coming out in 2015 anymore, so there's no point upgrading to even a 970.
|
# ¿ Apr 1, 2015 20:11 |
|
There are some larger ITX cases, like the new Core X1 from Thermaltake, that can take large graphics cards. The Raven RVZ01 had to compromise on space by opting for an SFX power supply, which limits your PSU pool pretty significantly. The Core V1 I'm on, however, won't take the Gigabyte GTX 970 I was looking at with its 3x DisplayPort outputs, though it does have plenty of heatsink clearance for the CPU, which I wouldn't get with the RVZ01. Because of that, I'm going to be a bump on a log and not even upgrade to a GTX 970 while I wait for 4K displays to drop significantly in cost (provided Intel has its way), right around the time Pascal is released.
|
# ¿ Apr 18, 2015 04:43 |
|
This is what I meant about an SFX PSU limiting your selection: you have to actually do some research and think things through, whereas with plain old ATX PSUs you have an incredible number of possibilities. I'm all for making powerful computers smaller, but if you're custom-building stuff yourself, having a limited selection of parts really hurts when something breaks.
|
# ¿ Apr 18, 2015 18:18 |
|
I don't buy video cards often and was wondering exactly how much better the cooling performance would be between, say, an ASUS GTX 970 and an EVGA one with the ACX 2.0 cooler. I'm trying to keep a single-card setup that will handle non-gaming output for two 4K monitors and a 34" ultrawide, which requires a combination of three DisplayPort or HDMI outputs that none of the ASUS or MSI GTX 970 cards carry. To further restrict options, I have a mini-ITX case with a maximum card length of about 11.5", so Gigabyte cards are out. Otherwise, I'm looking at a stupid GTX 980 for the better coolers and the output ports. I'm not sure when I'll buy the other monitors, but it would be within a year. It sounds like future-proofing, but gaming is not my priority: screens, plus low noise and heat at idle, are my priorities, along with being able to play something at 3440x1440 without questioning my video card.
|
# ¿ May 7, 2015 19:21 |
|
I'm just crossing my fingers that Witcher 3 at 3440x1440 performs just fine on a GTX 970. RAM and disk shouldn't be a problem at least, and with an i7-4790K I'm going to be slightly pissed if the CPU is a factor.
|
# ¿ May 13, 2015 21:52 |
|
You would almost think AMD is trying to run themselves right into the ground and through the ocean floor with the decisions they're making. It sucks to be a bit of an underperforming underdog, but they're not even doing what halfwit management consultants would suggest anymore. I suspect they really have conceded that the only people left buying their stuff are Intel haters or completely uninformed "enthusiasts" being paid by their marketing folks to give passable reviews.
|
# ¿ May 27, 2015 17:04 |
|
Panty Saluter posted:They broke me again. This is on sale for 299 with Witcher and Batman, so I upgraded my wife's computer. She's currently on a 650 Ti which has been a fantastic little card but is getting a little long in the tooth. Fuck, I literally bought that same card two weeks ago for $340-ish off of Amazon, and I really haven't had the time to actually use it, so I might as well have waited. I had the impression the game codes would go away once Witcher 3 released, though, and that's what drove me to buy when I did.
|
# ¿ May 28, 2015 17:51 |
|
Parker Lewis posted:I mostly game with my current 970 on a 1080p TV but occasionally play stuff on a 3440x1440 monitor (where the 970 has some trouble with Witcher 3 and GTA V) and I'm trying to figure out if I'm better off spending ~ $350 on a second 970 or ~ $400 to sell my 970 and buy a 980 Ti.
|
# ¿ Jun 1, 2015 12:45 |
|
The GTX 970 is the sweet spot for maybe up to 2560x1440, but I'm somewhat regretting it for my 3440x1440 monitor, because I can hardly even get 40 fps in Witcher 3 at medium settings. Having to go up to a 980 Ti for a few AAA titles I hardly play is really a waste of money for me, though. I run at 2560x1080 and that gets me 50+ fps, but it's a little disappointing for someone who has basically never paid more than $200 for a video card.
|
# ¿ Jun 6, 2015 19:53 |
|
For a while in the '90s, when we were still on the VGA standard, I remember the DACs and ADCs ATI used measured better than the ones on nVidia's cards, by a few decibels maybe, which at least meant less chance of aliasing artifacts and demonstrably better color range for certain video pros (not relevant for gamers or anything). I can't remember the review, but it rated video cards much the way you'd measure signal response from speakers, which was interesting at the time. Funny enough, we do have a digital VGA standard now, but nobody implements it except maybe three vendors. And hard drive seek noise is still audible on many laptops for me. I'll also contend that until the mid-'80s or so, DACs were not that great or efficient, until the modern 1-bit DAC was perfected (the early '80s had some issues for a while). Hardly 50 years. Christ, early Motown Records didn't have anything that would remotely compete with a fuckin' iPhone 3G's DAC or any of its ADCs.
|
# ¿ Jul 3, 2015 12:51 |
|
Manifest posted:I need a new card and am on a roughly $200 budget.
|
# ¿ Jul 4, 2015 14:57 |
|
Zero VGS posted:Why do they block it... do they actually sell Nvidia Grid to anyone?
|
# ¿ Sep 15, 2015 19:13 |
|
Cingulate posted:I'm dabbling with CUDA a bit right now and considering asking for a few 1000€ to spend on a system with a GPU. I've noticed there seems to be this three-fold split in NVIDIA's offerings: GeForce is for gamers and the Titan costs 1000 or so, then you have Quadro which starts at like 2000 for something that seems, numbers wise, much worse than a Titan, and then you have Tesla, which is the price of a car. I'm wondering, how much am I short-selling our total efforts by considering a 12GB Titan over a Tesla? There's more info here on why a Titan X is probably the best overall choice, and why a GTX 970 will likely be fine if you're just dipping your toes in: http://timdettmers.com/2014/08/14/which-gpu-for-deep-learning/
|
# ¿ Nov 17, 2015 03:53 |
|
SwissArmyDruid posted:Not that hard to trick the Nvidia cards into thinking that they're not actually in a VM. I hear it only takes one or two things. And I'm one of those dozen people interested in this approach, because I'd like to run some CUDA stuff in one VM running Linux and, when it's not in use, have the GPU switched over to an HTPC VM running Windows (DRM BS for TV recording and such is easier under Windows). Maybe I should just get separate GPUs or eat the cost of a Tesla, but buying multiple $1k+ cards sucks for what's primarily home use.
|
# ¿ Dec 18, 2015 17:36 |
|
Oh.... KVM, basically. I've definitely read about getting nVidia card passthrough to work via KVM, but not under ESXi. According to this it's just two flags: http://www.se7ensins.com/forums/threads/how-to-setup-a-gaming-virtual-machine-with-gpu-passthrough-qemu-kvm-libvirt-and-vfio.1371980/ but that may not apply with Linux as the guest OS. It makes no sense why Hyper-V would matter with KVM as the hypervisor, but evidently the VM detection logic definitely triggers with Windows as the guest OS.
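For reference, the flags people usually cite look something like this when you build the QEMU command line by hand; the PCI address and vendor string below are placeholders, and I haven't verified this setup myself:

```python
# The commonly cited QEMU flags for hiding the hypervisor from nVidia's
# driver, assembled as an argv list. kvm=off masks the KVM CPUID signature;
# hv_vendor_id (QEMU 2.5+) spoofs the Hyper-V vendor string and only matters
# for Windows guests with Hyper-V enlightenments enabled.
import subprocess

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-cpu", "host,kvm=off,hv_vendor_id=123456789ab",   # hide hypervisor signatures
    "-device", "vfio-pci,host=01:00.0,x-vga=on",       # pass through the GPU
]
# subprocess.run(qemu_cmd)  # plus the usual memory, disk, and display arguments
```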
|
# ¿ Dec 18, 2015 18:44 |
|
I'm a semi-serious GPGPU deep learning / machine learning enthusiast and I have zero freakin' idea what nVidia means by "deep learning operations" in the PX 2 announcement. What are they, matrix multiplications, a hardware function differentiator, or convolution acceleration? All the press releases so far about the PX 2 explain, at best, what a neural net is, going into almost zero actual technical detail. Has anyone got any more information on what's going on in the architecture / solution? I'm looking at trying to speed up training for models that take forever to train (I am not a smart man and am using gargantuan layers for the moment) and don't much care about the real-time video classification parts, which I suspect may be what the platform is aimed at.
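My guess is that it mostly means convolutions lowered to big matrix multiplies, since that's the bulk of the arithmetic in these nets. A toy im2col + GEMM sketch of what I mean (single channel, no padding or stride):

```python
import numpy as np

def im2col(img, k):
    # Unroll every k-by-k patch of the image into a column.
    h, w = img.shape
    patches = [img[i:i + k, j:j + k].ravel()
               for i in range(h - k + 1)
               for j in range(w - k + 1)]
    return np.stack(patches, axis=1)        # shape (k*k, n_patches)

img  = np.random.rand(8, 8).astype(np.float32)
kern = np.random.rand(3, 3).astype(np.float32)
out  = (kern.ravel() @ im2col(img, 3)).reshape(6, 6)   # convolution as one GEMM
```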
|
# ¿ Jan 12, 2016 05:00 |
|
Paul MaudDib posted:Half-precision floats mostly.

xthetenth posted:That people manage to make spec non-compliant cables boggles the mind.
|
# ¿ Jan 12, 2016 19:13 |
|
Ah, I thought CUDA already had half-precision, since 16-bit precision shaders have been around forever, and I assumed that whatever I was using in Theano and Keras wound up as 16-bit floats.
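For anyone curious why half precision is its own can of worms, here's a quick numpy demonstration; the float16 properties it prints are exact, and the mitigation at the end is just the common store-fp16/accumulate-fp32 pattern:

```python
import numpy as np

# float16 gives roughly 3 decimal digits and a max value of 65504, so small
# updates vanish and reductions overflow easily.
print(np.finfo(np.float16).eps)    # 0.000977: smallest step around 1.0
print(np.finfo(np.float16).max)    # 65504.0: easy to overflow in a big sum
print(np.float16(1.0) + np.float16(0.0004) == np.float16(1.0))  # True: update lost

# Mitigation: keep data in fp16, but accumulate in fp32.
acc = np.float32(0.0)
for _ in range(1000):
    acc += np.float32(np.float16(0.1))   # upcast before accumulating
print(acc)   # close to 100, rather than drifting the way a pure fp16 sum would
```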
|
# ¿ Jan 13, 2016 01:14 |
|
I work on high-resolution video processing using neural networks as a hobby. My entire architecture would change (for the better) if I could treat GPU memory like it's, uh, actually memory, rather than what it is in practice: a manually managed L2 cache.
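Here's the kind of staging boilerplate I mean, sketched in plain Python with numpy copies standing in for host/device transfers; none of these names or the VRAM figure are a real API:

```python
import numpy as np

VRAM_BUDGET = 3 * 1024**3    # pretend we can stage ~3 GiB at a time

def process_video(frames, kernel):
    """Run `kernel` over frames in VRAM-sized tiles, shuttling data by hand."""
    tile = max(1, VRAM_BUDGET // frames[0].nbytes)
    out = []
    for i in range(0, len(frames), tile):
        batch = np.stack(frames[i:i + tile])  # stands in for a host->device copy
        out.append(kernel(batch))             # device compute
        # a device->host copy back would go here in real code; with true
        # unified/paged memory the tiling and copies largely disappear
    return np.concatenate(out)
```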
|
# ¿ Jan 23, 2016 20:44 |
|
That fuckup goes right up there with the guy several years ago who "delidded" his Intel Extreme CPU or whatever but actually ripped the CPU clean in half, along two layers or so.
|
# ¿ Mar 9, 2016 17:50 |
|
30 Hz, on the other hand, is fucking murder on your eyes even when staring at a screen full of text, because the instant you scroll, your eyes will mutiny against your eye sockets.
|
# ¿ Mar 29, 2016 17:37 |