|
Paul MaudDib posted:For legitimate games, you can set up a certificate authority system. NVIDIA has the root cert, you issue a sub-CA to each studio, when they release a binary they sign it and the driver verifies the chain of signatures before allowing HDCP to be disabled. Which would prevent most of the problems with compatibility/expense - the only real problem is that if you are an aspiring game-dev you will need to buy a HDCP-compatible display until you're legit enough to get a cert.

Seriously dude, what is it with you and this loving obsession with HDCP? It's not even how data leaves the card when you're using it for compute. You could take the connector off entirely and it would make no difference. You are not listening to the people telling you that the way you get data off the card in these cases is internally, over the PCI bus, without going anywhere near HDMI, and no, you can't just disable that without breaking all game engines. And no, loving LOL, NVidia is not going to start shipping drivers that only work with games whose publishers have a certificate issued by NVidia.
|
# ? Jan 19, 2018 13:11 |
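For what the proposed scheme would actually look like, here's a toy Python sketch of the chain-of-trust check Paul describes (root cert → studio sub-CA → signed binary). This is purely illustrative: it uses symmetric HMAC "signatures" from the standard library instead of real asymmetric X.509 certificates, and every key and name in it is hypothetical.

```python
import hashlib
import hmac

# Toy stand-in for a real signature: HMAC over the payload with the
# issuer's key. A real scheme would use asymmetric X.509/ECDSA
# certificates; this only illustrates walking the chain of trust.
def toy_sign(issuer_key: bytes, payload: bytes) -> bytes:
    return hmac.new(issuer_key, payload, hashlib.sha256).digest()

def toy_verify(issuer_key: bytes, payload: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(toy_sign(issuer_key, payload), sig)

# Hypothetical chain: vendor root -> studio sub-CA -> game binary.
ROOT_KEY = b"vendor-root-key"       # held by the driver vendor
studio_key = b"studio-sub-ca-key"   # issued to one studio

# The "sub-CA cert" is the studio key endorsed by the root.
studio_cert_sig = toy_sign(ROOT_KEY, studio_key)

# The studio signs its released binary.
game_binary = b"\x7fELF...game code..."
binary_sig = toy_sign(studio_key, game_binary)

def driver_allows_hdcp_off(binary, binary_sig, studio_key, studio_cert_sig):
    """Driver-side check: verify the whole chain before relaxing DRM."""
    root_ok = toy_verify(ROOT_KEY, studio_key, studio_cert_sig)   # sub-CA endorsed by root?
    game_ok = toy_verify(studio_key, binary, binary_sig)          # binary signed by sub-CA?
    return root_ok and game_ok

print(driver_allows_hdcp_off(game_binary, binary_sig, studio_key, studio_cert_sig))  # True
print(driver_allows_hdcp_off(b"miner.exe", binary_sig, studio_key, studio_cert_sig))  # False
```

The point of the sketch is only that a tampered or unsigned binary fails the chain walk; it says nothing about whether the whole idea is practical, which the thread goes on to dispute.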
|
|
I think paul just really wants amd to sell better than nvidia through any possible means necessary, that's the only explanation I have for this derail
|
# ? Jan 19, 2018 13:15 |
|
If I were to design separate cards for gaming and mining, here is how I would do it:

Gaming card:
- Fast GPU
- Merely decent RAM
- Good selection of display outputs
- Big quiet open-air fans

Mining card:
- Not necessarily the best GPU; ideally get nVidia to provide binned chips with the best shaders for mining, but maybe the other logic is gimped or it just generally doesn't overclock well
- Great RAM, factory overclocked
- Just an HDMI port, maybe DisplayPort too
- Loud as hell squirrel cage fan
- Whole card only takes up a single slot
- Provide linking extenders to push the fan further out so you can slot up to 8 of the suckers right next to each other
|
# ? Jan 19, 2018 14:50 |
|
axeil posted:Yeah, that is what had me concerned. That, and I actually really like the RX480, although it's a bit hot/loud with that stock cooler on it. Going back to the R9 380 feels like a big downgrade, and I balk at the idea of selling an RX480 and then trying to buy a 580.

This is from 2 pages back, but you might try undervolting the RX480. I have a 4G 470 ($180 back before this outrageous pricing began) and dropping the voltages on the top two or three frequency bands made a huge difference. At the top frequency (1270 MHz on my card), going from 1150 mV to 1100 mV helps noticeably with temperatures and noise.
|
# ? Jan 19, 2018 15:08 |
Rastor posted:If I were to design separate cards for gaming and mining, here is how I would do it:

None of that matters. They would have to build more manufacturing capacity to satisfy both markets, and the mining market is volatile and pollutes the gaming market with loads of low-priced used stock when things go bust. Because mining has a boom/bust cycle, there will inevitably be shortages and soaring prices during the boom from lack of supply, and oversupply with rock-bottom prices during busts, with the bust fueling another boom because the price of entry for mining goes down and confidence creeps back up. There is no solution. This is just something we will have to live with for the foreseeable future.
|
|
# ? Jan 19, 2018 16:07 |
|
AVeryLargeRadish posted:None of that matters. They would have to build more manufacturing capacity to satisfy both markets and the mining market is volatile and pollutes the gaming market with loads of low priced used stock when things go bust. Because mining has a boom/bust cycle there will inevitably be shortages and soaring prices during the boom from lack of supply and oversupply with rock bottom prices during busts, with the bust fueling another boom because the price of entry for mining goes down and confidence creeps back up.

I partly addressed that aspect of it in another post:

Rastor posted:If I were a GPU manufacturer I would publish a store app in Steam which allowed you to purchase one (1) GPU upgrade for immediate shipping at MSRP if your system hardware showed you were on a gaming PC that needed an upgrade.

In my imaginary scenario I would ensure enough gaming cards were produced to allow sales through the app for upgrading gamers. Remaining available supply would be directed to the mining cards, which I would sell at auction. If demand were extreme I would only sell them in bundles of 4 or 8 at a time. My marketing would emphasize how unsuitable my gaming cards are for mining, showing really terrible hashes per watt but great frames per second.
|
# ? Jan 19, 2018 16:14 |
|
AVeryLargeRadish posted:None of that matters. They would have to build more manufacturing capacity to satisfy both markets and the mining market is volatile and pollutes the gaming market with loads of low priced used stock when things go bust. Because mining has a boom/bust cycle there will inevitably be shortages and soaring prices during the boom from lack of supply and oversupply with rock bottom prices during busts, with the bust fueling another boom because the price of entry for mining goes down and confidence creeps back up. There *are* solutions. There are just no *good* solutions.
|
# ? Jan 19, 2018 16:20 |
|
AVeryLargeRadish posted:There is no solution. This is just something we will have to live with for the foreseeable future.

If you're going to pin your hopes on anything, it's the post-crash purge, where every cryptominer tries to pay off that last $400 electricity bill by cashing out 480s for next to nothing into a suddenly supply-saturated market.
|
# ? Jan 19, 2018 16:48 |
|
Naffer posted:This is from 2 pages back but you might try undervolting the RX480. I have a 4G 470 ($180 back before this outrageous pricing began) and dropping the voltages on the top two or three frequency bands made a huge difference. At the top frequency (1270 MHz on my card) going from 1150 mV to 1100mV seems to help with heat, temperatures, and noise.

Is there any guide to doing that? I toyed around with WattMan back right when it came out and I either did it wrong or my card is unlucky because I got all kinds of crashes.
|
# ? Jan 19, 2018 17:42 |
|
axeil posted:Is there any guide to doing that? I toyed around with WattMan back right when it came out and I either did it wrong or my card is unlucky because I got all kinds of crashes.

There are probably decent guides on the internet, but what I found is that many people oversold how much they could drop the voltage and keep a stable card. Basically, switch the voltage to manual (marked A) and take down the voltages that show up to the right. Don't mess with the voltages on the low frequencies; concentrate on the ones in the top 3 bands (B, C and D). Those are the voltages the card uses when it is running at high speed. My particular card uses 1137, 1150, and 1150 mV for those last three bands.

Drop them gently, maybe 20 mV at a time, then play a demanding game or run a benchmark for a while to see if it stays stable. Keep doing that and eventually the game will crash, at which point you'll want to edge the voltages back up. Say it crashes at 1085 mV: you might want to use 1100 mV even if it was stable at 1090 mV, just to have some buffer.

On my card I had crashes using: 1080 / 1085 / 1090
I didn't have crashes using: 1080 / 1090 / 1095
So I use: 1095 / 1100 / 1105

If your cooling is good and your card stays pegged at the top frequency, it will only really be using the top number for voltage. As a result, you might miss it if your undervolt on the lower frequency bands is making the card infrequently unstable, so I'd be conservative with all but the top band.

Naffer fucked around with this message at 17:58 on Jan 19, 2018
# ? Jan 19, 2018 17:54 |
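The procedure above (step down ~20 mV, stress test, back off with a safety buffer after the first crash) amounts to a simple search loop. Here's a minimal Python sketch of it; `is_stable` is a stand-in for the slow manual part (set the voltage in WattMan, then benchmark for a while), and all the specific numbers are hypothetical.

```python
def find_undervolt(start_mv, is_stable, step=20, floor_mv=900, buffer_mv=15):
    """Walk the top-band voltage down until the stress test fails,
    then back off with a safety buffer, per the procedure above.

    is_stable(mv) stands in for setting the voltage manually and
    running a demanding game or benchmark to check for crashes.
    """
    mv = start_mv
    last_good = start_mv
    while mv - step >= floor_mv:
        mv -= step
        if not is_stable(mv):
            # First crash: settle at a buffer above the crashing
            # voltage, but never below the last known-stable value
            # and never above where we started.
            return min(start_mv, max(last_good, mv + buffer_mv))
        last_good = mv
    return last_good
```

For a hypothetical card that is stable down to 1088 mV, `find_undervolt(1150, lambda mv: mv >= 1088)` steps through 1130/1110/1090, crashes at 1070, and settles on 1090. The buffer logic mirrors the post: after a crash, don't just use the last value that happened to pass one benchmark run.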
|
DrDork posted:I'd also give it roughly a week for someone to figure out how to spoof a miner app to match the signature of a legitimate game to get around the signing. It'd be a huge cat-and-mouse game for no real practical gain on NVidia's end.

Paul MaudDib posted:They designed it with a lot of end-of-life parts (possibly as a way to clear stock of near-obsolete parts with a limited-production product) and blew though the entire available stockpile in no time flat.

feedmegin posted:Seriously dude, what is it with you and this loving obsession with HDCP? It's not even how data leaves the card when you're using it for compute. You could take the connector off entirely and it would make no difference. You are not listening to the people telling you that the way you get data off the card in these cases is internally, over the PCI bus, without going anywhere near HDMI, and no you can't just disable that without breaking all game engines. And no, loving LOL NVidia is not going to start shipping drivers that only work with games whose publishers have a certificate issued by NVidia.

Paul MaudDib posted:I have never said any of this is a good idea (in fact, explicitly the opposite, repeatedly). It was a narrow technical question about "explain how could you stop miners from using gaming GPUs".

He's not advocating this, just explaining a technical means by which a GPU vendor could restrict the compute utility of a "gaming" GPU by extending the use of existing DRM technology. We were discussing using graphics shaders to do computation, thus HDCP is relevant because it's the technical means already in use to prevent untrusted software from getting data back to the CPU. It would definitely gently caress over a lot of older games and hobbyist developers, but it could be done if they had the desire to. I think we all agree they do not.
|
# ? Jan 19, 2018 20:05 |
|
wolrah posted:We were discussing using graphics shaders to do computation, thus HDCP is relevant because it's the technical means already in use to prevent untrusted software from getting data back to the CPU.

It does not do this, though. It stops things physically connected to the HDMI port from displaying unauthorised content. That's why it's part of HDMI. The CPU can read it just fine.
|
# ? Jan 19, 2018 20:51 |
|
Yeah, I mean, ShadowPlay and other types of streaming/recording software are A Thing, so...
|
# ? Jan 19, 2018 20:59 |
|
German article: https://www.computerbase.de/2018-01/nvidia-geforce-gtx-verkauf-an-spieler/

"To ensure that GeForce gamers continue to have good GeForce graphics card availability in the current situation, we recommend that our trading partners make the appropriate arrangements to meet gamers' needs as usual."

Not clear what this is supposed to mean other than limiting orders to 2 per customer.
|
# ? Jan 19, 2018 23:51 |
|
i think thats saying "we aint doin poo poo, not our problem"
|
# ? Jan 19, 2018 23:54 |
|
ZobarStyl posted:Precisely. There's no method that can make a highly parallel calculation engine really good at this kind of math but godawful at this kind of nearly identical math. nV and AMD have invested far too much effort into making these things as fast as they can, and now we're asking them to throw a precisely designed wrench into the gears that fucks up one mode but not the other? It's not happening.
|
# ? Jan 20, 2018 00:15 |
|
Paul MaudDib posted:The root-of-trust already exists on newer processors, this is why you can't run Netflix 4K on anything except a Kaby Lake or newer or Pascal or newer.

Now I know why my GTX 680 plays YouTube at 4K just fine while Netflix looks awful via PC, yet plays in glorious 4K through the Netflix app on my smart TV (which I also use as my PC monitor).
|
# ? Jan 20, 2018 00:16 |
|
Naffer posted:There are probably decent guides on the internet for them, but what I found is that many people oversold how much they could drop the voltage and keep a stable card. This is extremely helpful, thank you!
|
# ? Jan 20, 2018 00:38 |
|
To what extent are AMD and Nvidia able to control the manufacturing throughput of GPUs? I know silicon manufacturing has a really long lead time, so do they have to tell TSMC/GloFo months in advance "we're going to need X chips every month for 18 months," and that's all they get? Or is it more like, once the appropriate tooling is in place, they can ask for more or less in the middle of the production run? I feel like the answer to these questions is going to determine how available the next-gen GPUs will be. If it's reasonably easy to move production up and down with crypto demand, then maybe they'll start high and lower it if the bubble bursts. If not, they'll probably choose to be conservative and not increase production to match demand.
|
# ? Jan 20, 2018 01:52 |
|
It's hard to really know for 100% certain, but once the tooling and such is functional and it's producing at an acceptable rate, scaling up/down usually seems to have a lead time of a month or two; mid-course production adjustments are certainly an option, to a point. A more critical question may be how the contracts are written, either locking them in to producing a minimum of X a month (I know AMD has poo poo like that, unsure about NVidia) regardless of market conditions, or allowing TSMC to raise run prices if NVidia comes back mid-stream asking for substantial increases that might disrupt TSMC's overall production plans. Availability of Ampere will, at least for the first few months, likely depend mostly on production efficiency: if TSMC is on the ball and gets everything working quickly, availability should be moderately decent (plus or minus buttcoin miner impacts). If TSMC struggles to produce the chips in volume like GloFo has from time to time, then availability will likely be poo poo regardless of buttcoiners.
|
# ? Jan 20, 2018 02:19 |
VostokProgram posted:To what extent are AMD and Nvidia able to control the manufacturing throughput of GPUs? I know silicon manufacturing has a really long lead time, so do they have to tell TSMC/GloFo months in advance "we're going to need X chips every month for 18 months" and that's all they get? Or is it more like, once the appropriate tooling is in place they can ask for more or less in the middle of the production run?

The current foundries are producing at maximum capacity right now, so they can't increase production without building new ones. The cost of building a foundry varies, but you are looking at anywhere from 1-4 billion dollars, with up to 10 billion for the really large and expensive ones. These foundries take years to build, so if they wanted to increase capacity to deal with crypto demand they would be looking at a 3-4 year lead time. They probably won't do that, since crypto is unstable and it would be disastrous to have a foundry half built and then see demand fall through the floor. They will only build new foundries when they are sure there will be enough demand to keep most of their manufacturing capacity running most of the time.
|
|
# ? Jan 20, 2018 02:49 |
|
Just had an interesting realization: with the 1080 Ti at $850 or above, and given the problems of SLI and the value retention, the Titan Xp is definitely a better deal than SLI 1080 Tis, and it might even be a better long-term buy than a single 1080 Ti edit: although you could mine on the spare 1080 Ti if your game didn't support SLI, and a pair of 1080 Tis would definitely mine faster when you weren't gaming Paul MaudDib fucked around with this message at 03:06 on Jan 20, 2018 |
# ? Jan 20, 2018 02:55 |
|
i'm lolling at the fact that my home ML server (4x 1080 Ti) which I built 3 months ago would be 2 grand more expensive if i were to build it today
|
# ? Jan 20, 2018 02:58 |
|
And if you'd been mining instead of wasting cycles on ML, it could have paid for itself by now!
|
# ? Jan 20, 2018 03:03 |
|
i've won the price of it on kaggle over the same time period so not really
|
# ? Jan 20, 2018 03:05 |
|
whats a 'home ML server'? seems serious
|
# ? Jan 20, 2018 03:09 |
|
redeyes posted:whats a 'home ML server'? seems serious

I read home ML server and Bitcoin on the same page and I immediately think money laundering.
|
# ? Jan 20, 2018 03:10 |
|
Machine Learning
|
# ? Jan 20, 2018 03:11 |
|
shrike82 posted:i've won the price of it on kaggle over the same time period so not really Nah, you had to actually learn something and do work, instead of posting smugly on /r/bitcoin. Who's the loser now?
|
# ? Jan 20, 2018 03:13 |
|
i've been playing around with ML frameworks (pytorch mainly at this point) as a hobby over the past year. started by just using a home gaming PC (Ryzen + 1x 1080 Ti) i then started working on Kaggle competitions (ML competitions platform) and decided to build a full-blown ML machine. It's currently a ThreadRipper 1950X + 64GB ram + 4x 1080 Ti. One of the few side benefits of the Bitcoin boom is you can buy nice and cheap aluminium open racks that hold multiple GPUs securely. the GPUs run at 70C while it's crunching numbers. and the thing about Kaggle is that while the non-Deep Learning competitions are brutal, the image-related competitions are still easy to do well in especially if you have a professional rack. i wouldn't be surprised if I make 20-30 grand this year in prizes from it as a hobby that i work on over the weekend.
|
# ? Jan 20, 2018 03:15 |
|
edit: they also have a new 1070 Ti for $920 and a new 1060 6 GB for $650, and a few 1050 Tis for $330-350. Paul MaudDib fucked around with this message at 04:40 on Jan 20, 2018 |
# ? Jan 20, 2018 04:36 |
|
Miners have more than doubled the retail prices of GPUs. What the gently caress.
|
# ? Jan 20, 2018 05:30 |
|
The fun part is it would still be semi-challenging to get back the $680 I paid for my MSI 1080 Armor near launch because I am bad at timing GPU purchases
|
# ? Jan 20, 2018 05:34 |
|
I spent like $430 for my 1070. I thought that was kinda high at the time too. gently caress
|
# ? Jan 20, 2018 05:43 |
|
I spent 640 on my 1080 around launch and based on all historical videocard prices I was ready to lose about half of that in 18 months.
|
# ? Jan 20, 2018 05:55 |
|
havenwaters posted:I spent like $430 for my 1070. I thought that was kinda high at the time too. gently caress Same, if getting a replacement wasn't so dicey, I'd sell and upgrade.
|
# ? Jan 20, 2018 06:55 |
|
feedmegin posted:It does not do this, though. It stops things physically connected to the HDMI port displaying unauthorised content. That's why it's part of HDMI. The CPU can read it just fine.

Please proceed, governor. Show me a method where I can read out an HDCP 2.2 framebuffer from another process. Should be simple, right?

DrDork posted:Yeah, I mean, ShadowPlay and other types of streaming/recording software are A Thing, so...

Anything ShadowPlay is streaming is almost certainly recognized by Nvidia Experience, and could be recognized more strictly if necessary (signatures/etc). When signatures of a legit game are present, DRM could be disabled. Ironically, this is the exact opposite of how DRM works right now. And yes, at a certain degree of restriction everything starts breaking and you play it The Way It's Meant To Be Played™ or not at all.

Can you throw up a huge roadblock that is effectively impractical to break? Yeah. The counterpoint raised earlier was "what if miners bought an HDCP 2.2-compliant monitor per GPU and pointed a webcam at it, then monitored it for hash checks and modified their hash algos to disregard low-order bit errors". That's way harder and less scalable than "run a CUDA app", and now we can start talking about watermarking and all kinds of other poo poo. Can the NVIDIA drivers also figure out that a QR code is being rendered to the screen and block that? Yeah, probably. Or miners could just buy AMD products instead. Would people be pissed as gently caress about a gratuitous display of DRM capabilities? Yeah, probably. And rightly so.

Paul MaudDib fucked around with this message at 07:35 on Jan 20, 2018
# ? Jan 20, 2018 07:08 |
|
Paul MaudDib posted:Please proceed governor. Show me a method where I can read out a HDCP 2.2 framebuffer from another process. Should be simple right?

I can't imagine drivers could be doing this kind of pattern recognition on frames without lowering performance for everything you do, though. Also, what am I missing here?

quote:HDCP support is also only over the wire, not on your device. A common misconception is that DRM means that the pixel frames coming from your video decoder are encrypted. Not so: all content is completely unencrypted locally, with encryption only occurring at the very last step, before the stream of pixels becomes a stream of physical electrons on a wire.
|
# ? Jan 20, 2018 07:35 |
|
Absurd Alhazred posted:I can't imagine drivers could be doing this kind of pattern recognition on frames without lowering performance for everything you do, though.

That would be an interesting test case, but GPUs have a lot of compute power to spare by design. If the final framebuffer is still paged in, or you can come up with an online analysis method, it really takes almost no time to crunch it and see if there's a QR code there. Compute time is dirt cheap on GPUs, effectively free versus non-cached/global-memory accesses (roughly O(100x) faster in my experience). My speculation would be that you could pick up on the QR code's alignment marks with a frequency transform, a la the EURion constellation. This watermark is resistant to translation/rotation/scaling/etc. You only have to pick it up with high probability, i.e. you can trigger a recheck if it's questionable; you just wouldn't want many false negatives. shrike82, want to back me up or crush my theory here?

I suspect the difference you're missing is that you could access the framebuffer from inside the entitled process but not outside it, which is the critical difference between feedmegin's contention and mine. Netflix is allowed to access Netflix's content; download-netflix.exe is not (much like an entitled gaming process with occlusion queries/etc). Random processes are not allowed to tap into HDCP, by process VMEM protection if nothing else. Process separation is potentially very weak on GPU drivers, but in this case I'm betting there are additional checks/protections/etc., certainly on Pascal-generation GPUs with the enhanced protection that Netflix wanted. I contend that this cross-process access is not possible with HDCP 2.2, and would like to see proof otherwise.

e: god I can't believe I'm doing all this consulting for free, hire me already Jen-Hsun

Paul MaudDib fucked around with this message at 08:11 on Jan 20, 2018
# ? Jan 20, 2018 07:41 |
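As a rough illustration of the QR-detection speculation above: QR codes carry three finder squares whose dark/light run lengths along any scanline through them sit in a fixed 1:1:3:1:1 ratio, which is what a cheap screen-space detector would key on. Here's a toy Python sketch of that one check on a single row of binarized pixels; it's nothing like a production detector (no 2D confirmation, no thresholding of real frames), just the run-ratio idea.

```python
def runs(scanline):
    """Collapse a row of pixels (0 = dark, 1 = light) into [value, length] runs."""
    out = []
    for px in scanline:
        if out and out[-1][0] == px:
            out[-1][1] += 1
        else:
            out.append([px, 1])
    return out

def has_finder_ratio(scanline, tol=0.5):
    """Look for the QR finder pattern's 1:1:3:1:1 dark/light run ratio."""
    r = runs(scanline)
    for i in range(len(r) - 4):
        win = r[i:i + 5]
        if [v for v, _ in win] != [0, 1, 0, 1, 0]:
            continue  # must be dark, light, dark, light, dark
        lengths = [n for _, n in win]
        unit = sum(lengths) / 7.0  # the whole pattern is 7 modules wide
        expect = [1, 1, 3, 1, 1]
        if all(abs(n - e * unit) <= tol * unit for n, e in zip(lengths, expect)):
            return True
    return False

# A scanline crossing a finder square (2 px per module) triggers it;
# plain alternating stripes do not.
pattern = [1]*3 + [0]*2 + [1]*2 + [0]*6 + [1]*2 + [0]*2 + [1]*3
stripes = ([0]*2 + [1]*2) * 5
print(has_finder_ratio(pattern))  # True
print(has_finder_ratio(stripes))  # False
```

Whether a driver could afford to run something like this (or a frequency-domain equivalent) on every frame without a measurable cost is exactly the open question in the exchange above.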
|
|
|
lol, i don't get the point of this argument. setting aside whether it's technically possible, paul started the discussion by saying that nvidia isn't going to do it. so it's a mutual jack-off session? please count me out
|
# ? Jan 20, 2018 10:05 |