feedmegin
Jul 30, 2008

Paul MaudDib posted:

For legitimate games, you can set up a certificate authority system. NVIDIA has the root cert, you issue a sub-CA to each studio, when they release a binary they sign it and the driver verifies the chain of signatures before allowing HDCP to be disabled. Which would prevent most of the problems with compatibility/expense - the only real problem is that if you are an aspiring game-dev you will need to buy a HDCP-compatible display until you're legit enough to get a cert.

Seriously dude, what is it with you and this loving obsession with HDCP? It's not even how data leaves the card when you're using it for compute. You could take the connector off entirely and it would make no difference. You are not listening to the people telling you that the way you get data off the card in these cases is internally, over the PCI bus, without going anywhere near HDMI, and no you can't just disable that without breaking all game engines. And no, loving LOL NVidia is not going to start shipping drivers that only work with games whose publishers have a certificate issued by NVidia.


Truga
May 4, 2014
Lipstick Apathy
I think paul just really wants amd to sell better than nvidia through any possible means necessary, that's the only explanation I have for this derail

Rastor
Jun 2, 2001

If I were to design separate cards for gaming and mining, here is how I would do it:

Gaming card:
Fast GPU
merely decent RAM
Good selection of display outputs
Big quiet open air fans

Mining card:
Not necessarily best GPU, ideally get nVidia to provide binned chips with the best shaders for mining but maybe the other logic is gimped or it just generally doesn't overclock well
Great RAM, factory overclocked
Just an HDMI port, maybe DisplayPort too
Loud as hell squirrel cage fan
Whole card only takes up a single slot
Provide linking extenders to extend the fan further out so you can slot up to 8 of the suckers right next to each other

Naffer
Oct 26, 2004

Not a good chemist

axeil posted:

Yeah, that is what had me concerned. That and that I actually really like the RX480, although it's a bit hot/loud as it has that stock cooler on it. I feel like going back to the R9 380 is a big downgrade and :laffo: at the idea of selling an RX480 and then trying to buy a 580.

This is from 2 pages back, but you might try undervolting the RX480. I have a 4GB 470 ($180 back before this outrageous pricing began) and dropping the voltages on the top two or three frequency bands made a huge difference. At the top frequency (1270 MHz on my card), going from 1150 mV to 1100 mV seems to help with both temperatures and noise.

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

Rastor posted:

If I were to design separate cards for gaming and mining, here is how I would do it:

Gaming card:
Fast GPU
merely decent RAM
Good selection of display outputs
Big quiet open air fans

Mining card:
Not necessarily best GPU, ideally get nVidia to provide binned chips with the best shaders for mining but maybe the other logic is gimped or it just generally doesn't overclock well
Great RAM, factory overclocked
Just an HDMI port, maybe DisplayPort too
Loud as hell squirrel cage fan
Whole card only takes up a single slot
Provide linking extenders to extend the fan further out so you can slot up to 8 of the suckers right next to each other

None of that matters. They would have to build more manufacturing capacity to satisfy both markets, and the mining market is volatile and pollutes the gaming market with loads of low-priced used stock when things go bust. Because mining has a boom/bust cycle, there will inevitably be shortages and soaring prices during the boom from lack of supply, then oversupply and rock-bottom prices during the bust, with each bust fueling the next boom because the price of entry for mining goes down and confidence creeps back up.

There is no solution. This is just something we will have to live with for the foreseeable future.

Rastor
Jun 2, 2001

AVeryLargeRadish posted:

None of that matters. They would have to build more manufacturing capacity to satisfy both markets, and the mining market is volatile and pollutes the gaming market with loads of low-priced used stock when things go bust. Because mining has a boom/bust cycle, there will inevitably be shortages and soaring prices during the boom from lack of supply, then oversupply and rock-bottom prices during the bust, with each bust fueling the next boom because the price of entry for mining goes down and confidence creeps back up.

There is no solution. This is just something we will have to live with for the foreseeable future.

I partly addressed that aspect of it in another post:

Rastor posted:

If I were a GPU manufacturer I would publish a store app in Steam which allowed you to purchase one (1) GPU upgrade for immediate shipping at MSRP if your system hardware showed you were on a gaming PC that needed an upgrade.

In my imaginary scenario I would ensure enough gaming cards were produced to allow sales through the app for upgrading gamers.

Remaining available supply would be directed to the mining cards, which I would sell at auction. If demand were extreme I would only sell them in bundles of 4 or 8 at a time.

My marketing would emphasize how unsuitable my gaming cards are for mining, showing really terrible hashes per watt but great frames per second.

SwissArmyDruid
Feb 14, 2014

by sebmojo

AVeryLargeRadish posted:

None of that matters. They would have to build more manufacturing capacity to satisfy both markets, and the mining market is volatile and pollutes the gaming market with loads of low-priced used stock when things go bust. Because mining has a boom/bust cycle, there will inevitably be shortages and soaring prices during the boom from lack of supply, then oversupply and rock-bottom prices during the bust, with each bust fueling the next boom because the price of entry for mining goes down and confidence creeps back up.

There is no solution. This is just something we will have to live with for the foreseeable future.

There *are* solutions.

There are just no *good* solutions.

ZobarStyl
Oct 24, 2005

This isn't a war, it's a moider.

AVeryLargeRadish posted:

There is no solution. This is just something we will have to live with for the foreseeable future.

Precisely. There's no method that can make a highly parallel calculation engine really good at this kind of math but godawful at this kind of nearly identical math. nV and AMD have invested far too much effort into making these things as fast as they can, and now we're asking them to throw a precisely designed wrench into the gears that fucks up one mode but not the other? It's not happening.

If you're going to pin your hopes on anything, it's the post-crash purge where every cryptominer tries to pay off that last $400 electricity bill by cashing out 480's for next to nothing in a suddenly supply saturated market.

axeil
Feb 14, 2006

Naffer posted:

This is from 2 pages back, but you might try undervolting the RX480. I have a 4GB 470 ($180 back before this outrageous pricing began) and dropping the voltages on the top two or three frequency bands made a huge difference. At the top frequency (1270 MHz on my card), going from 1150 mV to 1100 mV seems to help with both temperatures and noise.

Is there any guide to doing that? I toyed around with WattMan back right when it came out and I either did it wrong or my card is unlucky because I got all kinds of crashes.

Naffer
Oct 26, 2004

Not a good chemist

axeil posted:

Is there any guide to doing that? I toyed around with WattMan back right when it came out and I either did it wrong or my card is unlucky because I got all kinds of crashes.

There are probably decent guides on the internet for them, but what I found is that many people oversold how much they could drop the voltage and keep a stable card.
Basically, switch the voltage control to manual (marked A) and take down the voltages that show up to the right. Don't mess with the voltages on the low frequencies; concentrate on the ones in the top three bands (B, C and D). Those are the voltages the card uses when it is running at high speed. My particular brand of card uses 1137, 1150, and 1150 mV for those last three. Drop them gently, maybe 20 mV at a time, then play a demanding game or run a benchmark for a while to see if it stays stable. If you keep doing that, eventually the game will crash and you'll want to edge the voltages back up. Say it crashes at 1085 mV; you might then want to use 1100 mV even if it was stable at 1090 mV, just to have some buffer.

On my card I had crashes using :
1080 1085 1090
I didn't have crashes using:
1080 1090 1095
So I use:
1095 1100 1105

If your cooling is good and your card stays pegged at the top frequency, it will only really be using the top voltage. As a result, you might not notice if your undervolt on the lower frequency bands is making the card intermittently unstable, so I'd be conservative with all but the top band.
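Naffer's step-down-then-back-off loop is simple enough to sketch in Python. To be clear, this is just pseudocode made runnable, not a real tuning tool: `is_stable` is a hypothetical stand-in for the manual "play a demanding game or run a benchmark for a while" check, since WattMan changes are applied by hand and there's no API for any of this.

```python
# Sketch of Naffer's undervolting procedure for one frequency band.
# is_stable(mv) stands in for the manual stress test; in reality you set
# the voltage in WattMan and game/benchmark for a while at each step.

STEP_MV = 20      # drop gently, ~20 mV at a time
BUFFER_MV = 15    # edge back up past the last stable point for headroom

def find_safe_voltage(stock_mv, is_stable, floor_mv=1000):
    """Step the voltage down until instability, then back off with a buffer."""
    v = stock_mv
    last_stable = stock_mv
    while v - STEP_MV >= floor_mv:
        v -= STEP_MV
        if is_stable(v):
            last_stable = v
        else:
            break  # crashed: stop probing lower
    # Settle slightly above the lowest voltage that tested stable,
    # never above the stock voltage.
    return min(last_stable + BUFFER_MV, stock_mv)
```

For example, with a card that (unknown to you) becomes unstable below 1090 mV and a 1150 mV stock voltage, this walks 1130, 1110, 1090, crashes at 1070, and settles on 1105 mV, which matches the "stable at 1090, run it at ~1100" buffer Naffer describes.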


Naffer fucked around with this message at 17:58 on Jan 19, 2018

wolrah
May 8, 2006
what?

DrDork posted:

I'd also give it roughly a week for someone to figure out how to spoof a miner app to match the signature of a legitimate game to get around the signing. It'd be a huge cat-and-mouse game for no real practical gain on NVidia's end.

Any game supporting mods, pretty much.

Paul MaudDib posted:

They designed it with a lot of end-of-life parts (possibly as a way to clear stock of near-obsolete parts with a limited-production product) and blew though the entire available stockpile in no time flat.

Isn't the SNES Classic pretty much exactly the same internals, to the point that the same "jailbreak" hacks worked basically unmodified?


feedmegin posted:

Seriously dude, what is it with you and this loving obsession with HDCP? It's not even how data leaves the card when you're using it for compute. You could take the connector off entirely and it would make no difference. You are not listening to the people telling you that the way you get data off the card in these cases is internally, over the PCI bus, without going anywhere near HDMI, and no you can't just disable that without breaking all game engines. And no, loving LOL NVidia is not going to start shipping drivers that only work with games whose publishers have a certificate issued by NVidia.

He answered this less than a page ago.

Paul MaudDib posted:

I have never said any of this is a good idea (in fact, explicitly the opposite, repeatedly). It was a narrow technical question about "explain how could you stop miners from using gaming GPUs".

The answer is DRM, and I'm sure you can come up with some form of DRM that is appropriately permissive when the signal chain is secure or when the appropriate signatures/disabling keys are present. Sure, there will be edge cases that will break ("obscure DX9 titles which NVIDIA doesn't support/have binaries for and which use occlusion queries"), but it wouldn't be the first time old, unsupported games have broken in the history of computing.

He's not advocating this, just explaining a technical means by which a GPU vendor could restrict the compute utility of a "gaming" GPU by extending the use of existing DRM technology. We were discussing using graphics shaders to do computation, thus HDCP is relevant because it's the technical means already in use to prevent untrusted software from getting data back to the CPU.

It would definitely gently caress over a lot of older games and hobbyist developers, but it could be done if they had the desire to. I think we all agree they do not.

feedmegin
Jul 30, 2008

wolrah posted:

We were discussing using graphics shaders to do computation, thus HDCP is relevant because it's the technical means already in use to prevent untrusted software from getting data back to the CPU.

It does not do this, though. It stops devices physically connected to the HDMI port from displaying unauthorised content. That's why it's part of HDMI. The CPU can read it just fine.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, I mean, ShadowPlay and other types of streaming/recording software are A Thing, so...

Rastor
Jun 2, 2001

German article:

https://www.computerbase.de/2018-01/nvidia-geforce-gtx-verkauf-an-spieler/

"To ensure that GeForce gamers continue to have good GeForce graphics card availability in the current situation, we recommend that our trading partners make the appropriate arrangements to meet gamers' needs as usual."

Not clear what this is supposed to mean other than limiting orders to 2 per customer

Cygni
Nov 12, 2005

raring to post

i think thats saying "we aint doin poo poo, not our problem"

mobby_6kl
Aug 9, 2009

by Fluffdaddy

ZobarStyl posted:

Precisely. There's no method that can make a highly parallel calculation engine really good at this kind of math but godawful at this kind of nearly identical math. nV and AMD have invested far too much effort into making these things as fast as they can, and now we're asking them to throw a precisely designed wrench into the gears that fucks up one mode but not the other? It's not happening.

If you're going to pin your hopes on anything, it's the post-crash purge where every cryptominer tries to pay off that last $400 electricity bill by cashing out 480's for next to nothing in a suddenly supply saturated market.

Fundamentally, it's still a demand/supply issue. Even if they do come up with a way to cripple mining, well now they need to manufacture mining cards if they want to still make money off miners. But that's very risky because coins can go down the toilet at any moment and they'd end up with warehouses full of mining cards. The best they could do now IMO would be to crank up manufacturing a little to ease the shortages at least a bit, but that's still risky especially with Ampere coming out shortly.

kindermord
Jun 5, 2003
ducks is chickens with swimmy toes

Paul MaudDib posted:

The root-of-trust already exists on newer processors, this is why you can't run Netflix 4K on anything except a Kaby Lake or newer or Pascal or newer.

Now I know why my GTX 680 plays YouTube at 4K just fine while Netflix looks awful on my PC, yet plays in glorious 4K through the Netflix app on my smart TV (which I also use as my PC monitor).

axeil
Feb 14, 2006

Naffer posted:

There are probably decent guides on the internet for them, but what I found is that many people oversold how much they could drop the voltage and keep a stable card.
Basically, switch the voltage control to manual (marked A) and take down the voltages that show up to the right. Don't mess with the voltages on the low frequencies; concentrate on the ones in the top three bands (B, C and D). Those are the voltages the card uses when it is running at high speed. My particular brand of card uses 1137, 1150, and 1150 mV for those last three. Drop them gently, maybe 20 mV at a time, then play a demanding game or run a benchmark for a while to see if it stays stable. If you keep doing that, eventually the game will crash and you'll want to edge the voltages back up. Say it crashes at 1085 mV; you might then want to use 1100 mV even if it was stable at 1090 mV, just to have some buffer.

On my card I had crashes using :
1080 1085 1090
I didn't have crashes using:
1080 1090 1095
So I use:
1095 1100 1105

If your cooling is good and your card stays pegged at the top frequency, it will only really be using the top voltage. As a result, you might not notice if your undervolt on the lower frequency bands is making the card intermittently unstable, so I'd be conservative with all but the top band.



This is extremely helpful, thank you!

Yaoi Gagarin
Feb 20, 2014

To what extent are AMD and Nvidia able to control the manufacturing throughput of GPUs? I know silicon manufacturing has a really long lead time, so do they have to tell TSMC/GloFo months in advance "we're going to need X chips every month for 18 months" and that's all they get? Or is it more like, once the appropriate tooling is in place, they can ask for more or less in the middle of the production run?

I feel like the answer to these questions is going to determine how available the next gen GPUs will be. If it's reasonably easy to move production up and down with crypto demand then maybe they'll start high and lower it if the bubble bursts. If not, they'll probably choose to be conservative and not increase production to match demand.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
It's hard to really know for 100% certain, but once the tooling and such is functional and it's producing at an acceptable rate, scaling up/down usually seems to have a lead time of a month or two--mid-course production adjustments are certainly an option, to a point. A more critical question may be how the contracts are written, either locking them in to producing a minimum of X a month (I know AMD has poo poo like that, unsure about NVidia) regardless of market conditions, or allowing TSMC to raise run prices if NVidia comes back mid-stream asking for substantial increases that might disrupt TSMC's overall production plans.

Availability of Ampere will, at least for the first few months, likely depend mostly on production efficiency: if TSMC is on the ball and gets everything working quickly, availability should be moderately decent (plus or minus buttcoin miner impacts). If TSMC struggles to produce the chips in volume like GloFlo has from time to time, then availability will likely be poo poo regardless of buttcoiners.

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

VostokProgram posted:

To what extent are AMD and Nvidia able to control the manufacturing throughput of GPUs? I know silicon manufacturing has a really long lead time, so do they have to tell TSMC/GloFo months in advance "we're going to need X chips every month for 18 months" and that's all they get? Or is it more like, once the appropriate tooling is in place, they can ask for more or less in the middle of the production run?

I feel like the answer to these questions is going to determine how available the next gen GPUs will be. If it's reasonably easy to move production up and down with crypto demand then maybe they'll start high and lower it if the bubble bursts. If not, they'll probably choose to be conservative and not increase production to match demand.

The current foundries are producing at maximum capacity right now, so they can't increase production without building new foundries. The cost of building a foundry varies but you are looking at anywhere from 1-4 billion dollars with up to 10 billion for the really large and expensive ones. These foundries take years to build, so if they wanted to increase capacity to deal with crypto demand they would be looking at a 3-4 year lead time. They probably won't do that since crypto is unstable and it would be disastrous to have a foundry half built and then see demand fall through the floor. They will only build new foundries when they are sure that there will be enough demand to keep most of their manufacturing capacity running most of the time.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Just had an interesting realization: with the 1080 Ti at $850 or above, and given the problems of SLI and the value retention, the Titan Xp is definitely a better deal than SLI 1080 Tis, and it might even be a better long-term buy than a single 1080 Ti :v:

edit: although you could mine on the spare 1080 Ti if your game didn't support SLI, and a pair of 1080 Tis would definitely mine faster when you weren't gaming

Paul MaudDib fucked around with this message at 03:06 on Jan 20, 2018

shrike82
Jun 11, 2005
i'm lolling at the fact that my home ML server (4x 1080 Ti) which I built 3 months ago would be 2 grand more expensive if i were to build it today

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
And if you'd been mining instead of wasting cycles on ML, it could have paid for itself by now!

shrike82
Jun 11, 2005
i've won the price of it on kaggle over the same time period so not really

redeyes
Sep 14, 2002

by Fluffdaddy
whats a 'home ML server'? seems serious

Zil
Jun 4, 2011

Satanically Summoned Citrus


redeyes posted:

whats a 'home ML server'? seems serious

I read "home ML server" and "Bitcoin" on the same page and I immediately think money laundering.

craig588
Nov 19, 2005

by Nyc_Tattoo
Machine Learning

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

shrike82 posted:

i've won the price of it on kaggle over the same time period so not really

Nah, you had to actually learn something and do work, instead of posting smugly on /r/bitcoin. Who's the loser now?

shrike82
Jun 11, 2005
i've been playing around with ML frameworks (pytorch mainly at this point) as a hobby over the past year.
started by just using a home gaming PC (Ryzen + 1x 1080 Ti)

i then started working on Kaggle competitions (ML competitions platform) and decided to build a full-blown ML machine.
It's currently a ThreadRipper 1950X + 64GB ram + 4x 1080 Ti. One of the few side benefits of the Bitcoin boom is you can buy nice and cheap aluminium open racks that hold multiple GPUs securely. the GPUs run at 70C while it's crunching numbers.

and the thing about Kaggle is that while the non-Deep Learning competitions are brutal, the image-related competitions are still easy to do well in especially if you have a professional rack. i wouldn't be surprised if I make 20-30 grand this year in prizes from it as a hobby that i work on over the weekend.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
:sigh:



edit: they also have a new 1070 Ti for $920 and a new 1060 6 GB for $650, and a few 1050 Tis for $330-350.

Paul MaudDib fucked around with this message at 04:40 on Jan 20, 2018

Kazinsal
Dec 13, 2011



Miners have more than doubled the retail prices of GPUs.

What the gently caress.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
The fun part is it would still be semi-challenging to get back the $680 I paid for my MSI 1080 Armor near launch because I am bad at timing GPU purchases :shrug:

MagusDraco
Nov 11, 2011

even speedwagon was trolled
I spent like $430 for my 1070. I thought that was kinda high at the time too. gently caress

craig588
Nov 19, 2005

by Nyc_Tattoo
I spent $640 on my 1080 around launch, and based on all historical video card prices I was ready to lose about half of that in 18 months.

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


havenwaters posted:

I spent like $430 for my 1070. I thought that was kinda high at the time too. gently caress

Same, if getting a replacement wasn't so dicey, I'd sell and upgrade.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

feedmegin posted:

It does not do this, though. It stops devices physically connected to the HDMI port from displaying unauthorised content. That's why it's part of HDMI. The CPU can read it just fine.

Please proceed, governor. Show me a method where I can read out an HDCP 2.2 framebuffer from another process. Should be simple, right?

DrDork posted:

Yeah, I mean, ShadowPlay and other types of streaming/recording software are A Thing, so...

Anything ShadowPlay is streaming is almost certainly recognized by Nvidia Experience, and could be recognized more strictly if necessary (signatures/etc). When signatures of a legit game are present, DRM could be disabled. Ironically, this is the exact opposite of how DRM works right now.

And yes, at a certain degree of restriction everything starts breaking and you play it The Way It's Meant To Be Played™ or not at all.

Can you throw up a huge roadblock that is effectively impractical to break? Yeah, the counterpoint raised earlier was "what if miners bought a HDCP 2.2-compliant monitor per GPU and pointed a webcam at it, then monitored it for hash checks and then modified their hash algos to disregard low-order bit errors". That's way harder and less scalable than "run a CUDA app", and now we can start talking about watermarking and all kinds of other poo poo.

Can the NVIDIA drivers also figure out that a QR code is being rendered to the screen and block that? Yeah, probably.

Or, miners could just buy AMD products instead.

Would people be pissed as gently caress about a gratuitous display of DRM capabilities? Yeah, probably. And rightly so.

Paul MaudDib fucked around with this message at 07:35 on Jan 20, 2018

Absurd Alhazred
Mar 27, 2010

by Athanatos

Paul MaudDib posted:

Please proceed, governor. Show me a method where I can read out an HDCP 2.2 framebuffer from another process. Should be simple, right?


Anything ShadowPlay is streaming is almost certainly recognized by Nvidia Experience, and could be recognized more strictly if necessary (signatures/etc). When signatures of a legit game are present, DRM could be disabled. Ironically, this is the exact opposite of how DRM works right now.

And yes, at a certain degree of restriction everything starts breaking and you play it The Way It's Meant To Be Played™ or not at all.

Can you throw up a huge roadblock? Yeah, the counterpoint raised earlier was "what if miners bought a HDCP monitor per GPU and pointed a webcam at it, then monitored it for hash checks and then modified their hash algos to disregard low-order bit errors". That's way harder than "run a CUDA app".

Can the NVIDIA drivers also figure out that a QR code is being rendered to the screen and block that? Yeah, probably.

Or, miners could just buy AMD products instead.

Would people be pissed as gently caress about a gratuitous display of DRM capabilities? Yeah, probably.

I can't imagine drivers could be doing this kind of pattern recognition on frames without lowering performance for everything you do, though.

Also, what am I missing here?

quote:

HDCP support is also only over the wire, not on your device. A common misconception is that DRM means that the pixel frames coming from your video decoder are encrypted. Not so: all content is completely unencrypted locally, with encryption only occurring at the very last step before the stream of pixels becomes a stream of physical electrons on a wire.

Technically speaking, this means that all framebuffers presented to DRM/KMS, are provided unencrypted; if GPU composition is involved, the buffers presented through OpenGL or Vulkan for composition are also unencrypted, as is the GPU output. These unencrypted buffers are placed on a plane, which is mixed into a single CRTC's unencrypted output by the display controller. Only once the final CRTC pixel stream makes it to the encoder stage (where it is transformed from pixel content into a stream of DisplayPort/HDMI signals) does the encryption occur. By this stage, the content is already unrecognisable, as it has been prepared for electrical transmission by 8/10b encoding, potentially cut into DisplayPort packets, and so on.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Absurd Alhazred posted:

I can't imagine drivers could be doing this kind of pattern recognition on frames without lowering performance for everything you do, though.

Also, what am I missing here?

That would be an interesting test case, but GPUs have a lot of compute power to spare by design. If the final framebuffer is still paged in, or you can come up with an online analysis method, it really takes almost no time to crush it and see if there's a QR code there. Compute time is dirt cheap on GPUs, effectively free vs non-cached/global-memory accesses (roughly 100x faster in my experience). My speculation would be that you could pick up on the QR code's alignment marks with a frequency transform, a la the EURion constellation. This kind of watermark is resistant to translation/rotation/scaling/etc.

You only have to pick it up with high probability, i.e. you can trigger a recheck if it's questionable. You just wouldn't want many false-negatives. shrike82, want to back me up or crush my theory here?
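As an illustration only of the frequency-domain idea being speculated about here (and emphatically not anything NVIDIA ships): a QR-like grid of hard-edged modules concentrates spectral energy into a few peaks, while an ordinary rendered frame spreads it out. A toy numpy sketch, using a peak-to-total spectral ratio on synthetic 64x64 "frames":

```python
import numpy as np

def periodic_peak_ratio(frame):
    """Ratio of the strongest non-DC spectral peak to total spectral energy.

    A regular module grid (QR-like) concentrates energy into a few
    spatial-frequency peaks; noisy/natural content spreads it out.
    Toy metric only -- real watermark detection is far more involved.
    """
    f = np.abs(np.fft.fft2(frame - frame.mean()))
    f[0, 0] = 0.0                      # ignore any residual DC term
    total = f.sum()
    return float(f.max() / total) if total else 0.0

# A hard-edged periodic pattern (stand-in for QR modules) scores high...
qr_like = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)

# ...while random noise (stand-in for a normal rendered frame) scores low.
rng = np.random.default_rng(0)
noisy = rng.random((64, 64))
```

A threshold between those two regimes gives the "high probability with recheck on questionable frames" behavior described above; whether this survives rotation, scaling, and compositing is exactly the open question in the argument.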

I suspect the difference you're missing is that you could access the framebuffer from inside the entitled process but not outside, which is the critical difference between feedmegin's and my contention. Netflix is allowed to access Netflix's content, download-netflix.exe is not (much like with an entitled gaming process with occlusion queries/etc). Random processes are not allowed to tap into HDCP, by process VMEM protection if nothing else. Process separation is potentially very weak on GPU drivers, but in this case I'm betting there are additional checks/protections/etc. Certainly on Pascal-generation GPUs with the enhanced protection that Netflix wanted. I contend that this cross-process access is not possible on HDCP 2.2, and would like to see proof otherwise.

e: god I can't believe I'm doing all this consulting for free, hire me already Jen-Hsun :monocle:

Paul MaudDib fucked around with this message at 08:11 on Jan 20, 2018


shrike82
Jun 11, 2005
lol, i don't get the point of this argument. setting aside whether it's technically possible, paul started the discussion by saying that nvidia isn't going to do it.

so it's a mutual jack-off session? please count me out
