  • Locked thread
Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Prediction: this thread will end in a glorious blaze of redtext avatars

and possibly also a literal blaze as someone overdraws their wiring


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I also had really bad results with ccminer-alexis in general; it reports a much higher hashrate than the pool actually sees.

Paul MaudDib fucked around with this message at 07:16 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

MaxxBot posted:

Yeah I have multiple gaming rigs that are all mining and I'm gonna start playing on a PS4. I already was a weirdo with multiple rigs while being a casual gamer though so I might as well make some ROI.

Or you can turn down the intensity of your miners and play older games while you mine, I've seen some people doing that :lol:

I bought a GTX 1060 3 GB from MicroCenter for $150 (minus $5 coupon) so I can use my 1080 for mining (other coins, not Eth) while I'm playing older games. I play a lot of TF2, which doesn't even come close to saturating my 1060 let alone my 1080. GSync is great here.

I've been replaying Witcher 3 lately, that's more intense so I use the 1080 there.

When my system is idle, the extra 1060 can be ganged up with the 1080 for a ~40% improvement in hashrate on my gaming box.

Paul MaudDib fucked around with this message at 05:07 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

The Lord Bude posted:

I take it attempting this on a 780ti is a waste of time?

Look at benchmarks for the Radeon 7950/280 and extrapolate; that's roughly a similar card. You would probably make $3-4 a day and spend $1-2 of that on electricity. I wouldn't do it while you're running the A/C unless that room is shut off so the heat can't escape into the rest of the house.

Reminder: in a datacenter you double the expected cost of power, because you need to air-condition that heat back out. Factor that into your costs if you are running A/C.
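The A/C rule of thumb is easy to fold into a back-of-the-envelope daily profit estimate. A minimal sketch, all numbers made-up placeholders, assuming roughly 1 W of cooling per 1 W of waste heat (deliberately pessimistic):

```python
def daily_net_usd(revenue_usd, gpu_watts, usd_per_kwh, air_conditioned=False):
    """Daily mining profit after electricity.

    If the room is air-conditioned, roughly double the power cost:
    every watt the GPU dumps as heat has to be pumped back out
    (assumes ~1:1 cooling overhead, a pessimistic rule of thumb).
    """
    kwh_per_day = gpu_watts * 24 / 1000
    power_cost = kwh_per_day * usd_per_kwh * (2 if air_conditioned else 1)
    return revenue_usd - power_cost

# Hypothetical 780 Ti-class card: $3.50/day revenue, 230 W, $0.12/kWh
print(round(daily_net_usd(3.50, 230, 0.12), 2))        # no A/C
print(round(daily_net_usd(3.50, 230, 0.12, True), 2))  # with A/C
```

Note how the A/C case eats most of the margin on a card this old, which is the point.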

Fruit Chewy posted:

Edit: ALSO I've been running this random modified version of nicehash because it added dual mining eth+sia for nvidia cards (which isn't worth doing on my card it turns out) but it also happened to add 'ewbf' - another equihash miner - which is somehow like 15% faster than excavator on my card. Worth looking into if you're doing majority equihash on an nvidia card.

Be sure to watch what the pool actually reports, because some of the alternate miners don't work as well as they claim (e.g. the pool's 60-minute average hashrate is half or less of what the console says). Not as much of an issue with NiceHash, but more of an issue when mining directly.
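A crude sanity check for that, assuming you can read the pool's long-window average off its stats page (the function name and the 10% threshold are my own, not from any miner):

```python
def miner_overreports(console_hs, pool_hs, tolerance=0.10):
    """True if the console hashrate exceeds the pool's long-window
    effective hashrate by more than `tolerance` (default 10%).

    Compare against the pool's 1h+ average, not the instantaneous
    estimate, which is mostly just share-arrival luck.
    """
    if console_hs <= 0:
        return False
    return (console_hs - pool_hs) / console_hs > tolerance

# The "reports double" case: console says 600 H/s, pool credits ~300 H/s.
print(miner_overreports(600, 300))   # clearly over-reporting
print(miner_overreports(600, 570))   # within normal variance
```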

I don't trust NiceHash not to let some bitlord run a trojan kernel given its access to VRAM and the terrible loving memory segmentation in VRAM world. A zero-day in the driver is serious business given the kernel-mode access that device drivers have, and you are running a program that downloads arbitrary internet programs and gives them direct access to the NVIDIA drivers. It would be the Cryptolocker attack of the century, and you know everyone there has bitcoins.

Paul MaudDib fucked around with this message at 05:31 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Kazinsal posted:

I haven't read much into the state of memory protection on GPUs but I imagine that it's as bad as early memory protection and segmentation on microcomputers, if not worse.

Consider the following: http://www.sciencedirect.com/science/article/pii/S1742287615000559

NVIDIA's drivers are a little better about not allowing super dumb poo poo, but overall GPUs are designed around a "high performance computation" model. For example, on non-ECC NVIDIA GPUs the hardware ties into the host in kernel mode for performance, memory is not zeroed when it's reallocated between programs, etc. And yes, in general it's like working on a microcontroller sometimes: boundaries and fault conditions that you expect in CPU world may or may not exist. This isn't a model built around an assumption of possible bad actors, it's close to the metal by design.

Would you really notice an OpenCL/CUDA thread quietly cryptolockering your files in the background, and stopping when it sees logging/tracing/mouse movement/etc? Most people aren't locked down that far.

Paul MaudDib fucked around with this message at 06:10 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

craig588 posted:

This is a good reminder not to do it on important systems. It also reminded me to make sure my important files are backed up, but for my game computers I feel alright about having to format them.

Malware is where the real money is: people running NiceHash probably have some bitcoins around to pay ransoms with, and mining on somebody else's PC is the most direct route to free power.

The risk is that once infected, a single system can spread the infection. It cryptolockers a share or drops another exploit on your fileserver - what then?

If you're going to do that poo poo you should absolutely lock it down to a VLAN so it can't touch anything outside.

Paul MaudDib fucked around with this message at 06:40 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Cinara posted:

What is Nicehash picking for you mostly? I am trying to figure out why my 1080ti is only doing about $8 a day right now, it seems to only want to pick Lbry and Lyra2REv2.

Is it ccminer-alexis? If so, disable it for all algorithms. AFAIK it falsely reports much higher rates than you actually get.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Dren posted:

What do you think of running the miner on a liveusb with no kernel drivers for pata, sata, and scsi controllers? I should probably vlan it too like you suggest.

Does NiceHash for Linux even exist? My beef is specifically with the "program that downloads arbitrary code from the internet and runs it on your GPU" part, I think the risk of the actual official Github repositories being trojans is extremely low. And based on what people are saying about "trying a new version of Nicehash with better optimizations" I might have misunderstood how NiceHash works (although it still strikes me as a quasi-shady program overall). I am just relatively paranoid about this and would prefer to run relatively trusted builds from source where possible rather than "Free Internet Money Program, Just Run Me!". I lived through the early days of the internet and don't trust that poo poo.

Craptacular's right though, the real solution here is having backups of things you care about, but just like using pirated games from the internet, cryptomining with NiceHash strikes me as a risky activity where you should be double-sure that your backups are all in order.

Whitest Russian posted:

Could you mine in a VM to eliminate the security concerns? Or would that slow your bitgainz too much?

I actually played with this the other night and ESXi won't let me pass my NVIDIA GPU through (this is apparently feature-locked to Quadros). I had VT-d enabled and some devices showed up as passable, including even the audio controllers on the GPUs, but not the GPUs themselves. It might work with AMD, and there was also a suggestion that you could use KVM (another hypervisor) with a particular flag which nullified the check NVIDIA was doing to be sure you aren't doing passthrough on non-Quadros.

I've been wanting to try this anyway to see if I could run my whole system in hypervisor so I could dual-boot and run Linux at the same time (my desktop has tons of cores/memory to spare). In theory you shouldn't see any performance hit from passing the GPU through, VT-d gives direct access to the PCIe devices at a CPU level, there is no "shim" needed.

To jump back to Dren's post, running a miner that you compiled yourself from source (takes literally 5 minutes), inside a VLAN segment to keep it away from the rest of your network, with no local access to your drives/network access to shares/etc, sufficiently satisfies my paranoia. In theory running inside a hypervisor makes some of those conditions easier to fulfill since you're not talking about unplugging drives or your gaming PC being unable to access your fileserver, etc. You just wouldn't pass them through to that particular VM.

You don't really need to boot from USB, as long as your system can't touch anything else then the worst-case scenario is you need to re-image. But if you wanted to gild the lily a bit you could make a custom loopback image that has all your stuff installed, and then either boot that from USB (mounted RO) or PXE boot. Then your base system image would truly be immutable. Again with the VLAN thing you could easily set up a network partition where all this stuff could live and potentially be automatically quarantined based on some kind of IP/mac address filter. But if you're putting this much thought into it then you're way beyond just "run the miner directly instead of through NiceHash".

I've been needing to play with PXE too, because I have an old Thinkpad that's too old to understand how to boot from USB, and the CD drive isn't doing too hot, but it does understand how to PXE boot. PXE basically looks in a specific place for a TFTP server (I want to say on the DHCP host) so you need to do some rearranging from the typical router config but again, VLANs might come in handy so your private network can work like normal and you have separate DHCP/TFTP running for the guest VLAN segment served from a VM or something.

Paul MaudDib fucked around with this message at 22:26 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Dren posted:

I haven't set up VT-D with pci-e passthrough, last I looked it seemed like kind of a pain in the rear end. My guess is that it doesn't fully address the security concerns but I don't know the architecture well enough to say.

"Secure enough". It's obviously not impossible for exploits to jump the sandbox and they do occasionally happen but you're now assuming a deliberate attack on yourself rather than just low-hanging fruit of scanning your network for shares, dropping cryptolocker onto anything writeable, and calling it a day.

Someone with a pile of zero-days is always going to get you if they really want, but some random nerd's PC probably isn't that juicy a target compared to say, Amazon. You just have to put up enough of a speedbump that quick-and-dirty mass attacks aren't going to get you. A locked-down VM on a separate VLAN segment is more effort than 99.99% of people are going to bother with.

(neither here nor there since nobody is really targeting nerds with sandbox-jumping malware attacks, but: there is an interesting argument that the blatantly obvious approach is actually an essential element of Nigerian Prince scams, it self-selects for targets who are too stupid to see the scam coming so the scammer doesn't waste time on people who are too smart to wire $10k to Africa just because an email told them to. I would argue there is probably a parallel here, anyone who is smart enough to have their mining running in locked-down VMs on a separate VLAN segment hopefully also has offline backups and is going to give you the middle finger if you try and cryptolocker them. Or in other words, the only additional people that deploying an advanced attack would get you are also extremely unlikely to actually pay.)

Paul MaudDib fucked around with this message at 21:01 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

cheese-cube posted:

What OS gets the best hash rates? Or is that a dumb question and do you just go bare metal.

Barring a specific problem it doesn't matter, everything performs roughly the same.

NVIDIA cards on Windows 10 have hosed-up Ethereum performance due to semi-recent changes in how the memory caching works. You either need to run an old driver, mine a different coin, or run Win7/Win8/Win8.1/Linux.

NVIDIA's Linux drivers are kinda funky and I was running into some trouble getting control over fans/power limits/etc. I imagine it would be doable within another few evenings of tinkering, and it should be a "do it once, write it down, problem solved forever" kind of situation. Apart from the lack of low-level GPU control, though, everything ran normally at the defaults.

I think AMD's Windows drivers are clownshoes, I have tons of problems with GPUs losing their drivers and poo poo. GPU-Z and Afterburner will stop working and hashrates will go way down. Going into Device Manager and manually re-installing the device driver using "have disk" for one of the cards (automatically fans out to all identical cards as well) then restarting usually fixes it. Failing that running DDU and reinstalling the whole driver has always fixed it. This behavior doesn't seem normal and I'm wondering if my risers are occasionally disconnecting or something, but it just doesn't happen with the NVIDIA cards I have.

With Linux, I think AMD does have a similar problem where it's nowhere near as trivial to control power/fans/voltage as on Windows. However, AMD doesn't sign their VBIOS so in theory you can encode all this poo poo into the VBIOS and then the card will automatically set power limits/voltage/etc in any environment.

(Since we're on the topic of VBIOS, I have tried tweaking memory timings on my RX 480s but I've been unable to get any flashed VBIOS to work ever, across multiple cards. They just aren't recognized until I flash the original back.)

Running NVIDIA and AMD cards at the same time is possible under Windows but I don't think it is possible under Linux. Both the AMD and NVIDIA Linux drivers are heavily tied into the Xorg stack and you need to be running their appropriate driver in the Xorg session before you can run CUDA/OpenCL on them. Otherwise they don't show up to CUDA/OpenCL apps. I think it might be possible to run an Xorg session on a dummy output so the drivers are running, which could in theory let you have both at the same time. But I am pretty much at the limit of my Linux sysadmin ability here, Xorg is pretty black magic as far as I'm concerned.

Paul MaudDib fucked around with this message at 21:20 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Fauxtool posted:

I also would like to know what makes one good and which ones to buy. Im building a new gaming PC and I plan on tossing 2-3 of my older cards into my old pc and I want to do it right without burning down my home with a 4 year old mobo

By the way the ones I got were from "MinerParts.com" and I ordered through Amazon. All of the ones I got seem to be working apart from possibly one which disconnects every now and then (still working on this), so decent success rate. However, all the companies here are ordering from the same shady Chinese sources so I don't know if there are really any that are "better". You do probably want one of the later revisions, like Rev6 or 6C.

I would either go with the type that have a Molex input or the type that have a PCIe 6-pin input, and do not use any of the SATA power adapters that come with them. They are all the "overmolded" type, the plastic eventually softens and the pins inside short out and cause a fire - even under SSD-level power consumption let alone when hooked up to a GPU. You should under no circumstances ever use these for anything, they are literally a fire waiting to happen. Just throw them out.

Each GPU may draw up to 75W from the slot, so that's your design target for the risers. The Molex connector is rated for up to 11A per pin; at 12V that's 132W. So do not create any situation where two risers draw through a single Molex (e.g. no Molex-to-Molex Y-splitters).

Technically 6-pins also have a 75W limit, but you are probably OK up to 150W because in practice most connectors are specified for 8-pin usage anyway (the extra 2 pins in the 8-pin are sense pins; the 6-pin portion carries all the current). So ideally no situation where more than one riser/GPU is plugged into a single PCIe 6-pin, and as a hard limit no more than two per 6-pin (i.e. a single Y-splitter).

There are also PCIe riser types that have a SATA power connector (on the board). I would give these a pass, SATA has 3 12V pins rated at 1.5A each, that gives you 54W which isn't enough. (and yet another reason not to use those Molex-to-SATA cables, even in the best case you would be overdrawing them...)
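Putting those connector budgets in one place (wattages as computed above; the per-pin ratings are this post's working assumptions, and real ratings vary by manufacturer and wire gauge):

```python
# Per-connector 12 V power budgets, as derived in the post above.
CONNECTOR_LIMIT_W = {
    "pcie_slot": 75,            # PCIe spec slot budget; what each riser must carry
    "molex":     11 * 12,       # 132 W: ~11 A per pin at 12 V (optimistic pin rating)
    "pcie_6pin": 150,           # spec says 75 W, but contacts are built for 8-pin current
    "sata":      3 * 1.5 * 12,  # 54 W: three 12 V pins at 1.5 A each - not enough
}

def max_risers(connector, riser_draw_w=75):
    """How many full-draw (75 W) risers one connector can feed without overdraw."""
    return int(CONNECTOR_LIMIT_W[connector] // riser_draw_w)

for name in CONNECTOR_LIMIT_W:
    print(name, max_risers(name))
```

The output matches the advice: one riser per Molex, at most two per 6-pin, and zero on SATA.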

Paul MaudDib fucked around with this message at 22:07 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Fauxtool posted:

so on something like this the idea is to take the 6pin from the PSU and plug it directly into the pci-e riser?
https://www.amazon.com/Onvian-Minin...words=pci+riser

I think that's the safest overall if you have the 6-pins to spare, and you can probably even do a single Y-splitter safely on 6-pin. Running molex risers is also fine, just don't put a splitter on these. Keep SATA entirely out of this picture, including in cables/adapters.

It's all just a question of how many power connectors the card+riser need vs what your PSU gives you. Modular can be nice here since you can often mix-and-match the strings a little bit. For example my PSU has two "PERIPH" type sockets, instead of running 1 molex string + 1 SATA string I could run 2 molex strings.

Just avoid splitters/etc as much as possible and generally minimize the number of adapters/connectors you are using because each one is a place to heat up and cause a fire.

Paul MaudDib fucked around with this message at 22:19 on Jun 21, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

future ghost posted:

I think bitcoins and cryptocurrency is dumb as hell, and while this trend is clearly loving the market and screwing over everyone trying to build gaming machines, I'm really thrilled that I bought my old cards cheap and turned around and sold them for a 1:1 upgrade years later thanks to bitlords.

That said, to anyone buying old mining cards when this bubble pops: be on the lookout for cards with serial-based warranties. Both of the ex-mining cards I bought, the 290 and the 280X, eventually had their fans fail and had to be RMA'd. Neither had any other hardware problems, but it's something to consider.

(And to explain a little more: some companies like EVGA and Gigabyte offer transferrable warranties, usually tied to the date of manufacture or the original registration date. These are big pluses for any used card, but especially one that's been used for mining. I personally do think that mining takes a moderate toll on hardware, particularly the fans, and repasting isn't a terrible idea either since the paste can dry out from the constant heat.)

If you can talk the seller into a discount, a card with a dead fan could even be a good deal. A G10 bracket is literally $25 and AIOs start around $50-60. If you can talk the seller into a $50 discount then you just got a liquid-cooled card for $25.

Probably not worth it on low-end cards but 1070s/1080s/1080 Tis will strongly benefit from an AIO cooler especially for gaming (it's less of a gain with mining where you want to keep the power under control anyway). Summer is starting in earnest here and my temps are up, I'm thinking an AIO bracket might be a nice upgrade for my main gaming GPU.

If mining on 1080s/1080 Tis ever becomes a thing, it might even be a valid choice there to keep temps down and performance/efficiency up. A $75-per-card upgrade is too expensive to use with huge gangs of low-end cards, but if you have a powerful/expensive card then it might be something you could amortize in a reasonable amount of time.

There is also a fairly cheap ($67) full-cover waterblock for RX480s made by XSPC, the downside is that with a waterblock you are limited to one specific card (whereas the G10 fits virtually any reference board ever made) but it would be efficient (keep the VRMs nice and cool) and you wouldn't have to handle tons of individual radiators.

Paul MaudDib fucked around with this message at 05:41 on Jun 22, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Watermelon Daiquiri posted:

is there anything that uses seti or folding stuff as the algorithm for mining? it seems stupid to be wasting energy on pointless hashes when you can at least get SOMETHING tangible from it

Yeah, SETI and F@H and tons of other projects can use GPUs, and there is a standard platform, BOINC, that lets researchers queue up tasks.

Someone even made a FoldingCoin that distributes rewards based on F@H results, although it's not really worth anything it might help offset some of the power you spend.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

1gnoirents posted:

I found in my area (dallas) best buy of all places had considerable stock of 560s 570s 580s and 1060's 1070's (and 1080s and so on) at non-inflated prices... for the time being. I picked up a non reference PNY 1070 for exactly what it cost a few weeks ago there ($420). Not good, but not retarded either.

They were holding all the 580's and 1070's in the back now but the shelf spot is tagged to say that.

Best Buy with affordable computer parts? Truly these are the end times.

The first time I bought a SSD I didn't think about checking retail vs bare, and of course it showed up bare, no SATA cable. This was my first SSD and I really wanted to get it installed, so I went to Best Buy and after the salesman looked for 20 minutes (literally) he finally located one on a top shelf somewhere. Twenty-four dollars. I was actually offended by that, I left and found another computer repair shop nearby that sold me one out of their parts bin for like $10. Easy payday for the guy (if he didn't own the store I doubt that one made the cash register, either way he made a 400% profit on a $2 cable).

(LPT: if you computer a lot, even just for gaming, try to build a basic kit of spare parts to prevent mistakes like this. Spare SATA cables, some thermal paste, common screw sizes, spare video cables, etc. Otherwise it can easily be very expensive to debug a system in realtime, or you wait for parts. Other good additions: micro zip ties, velcro straps, maybe a USB 3.0/UASP SATA dock, 6pin-to-Molex converters/Y adapters, etc.)

All their GPUs are marked up to MSRP too, or even higher - although I guess that's cheap now that miners have bought up the entire supply of low-to-mid-range GPUs. There now exist RX 560s and 1050 Tis, then 1080s, and nothing in between. :lol:

1gnoirents posted:

edit: I've seen these things in the GPU thread a long time ago, but does anybody know much about these?

http://www.hwtools.net/Adapter/PE4C%20V2.1.html

Looks like... perhaps $80 total for a mPCIe connection. Presumably you can use a powerbrick or just a separate PSU altogether to run the thing. Turns out I have less spare PC parts than I remembered but basically unlimited access to hundreds of laptops so this is very interesting to me

I don't know those specifically, I brought up that I was looking at the EXP GDC Beast v8.0 a while ago in the GPU thread to try and use with my laptop's ExpressCard port. My conclusion at the time was that while the base itself was reasonable (~$30) the whole package (another $30 for the enclosure and $80 for the brick) was too much. Since I have an Expresscard slot I would basically be getting PCIe 1.1x1 speeds, i.e. probably losing at least 25% of my framerate.

If you're curious about gaming performance there are some reviews of the Beast floating around on Youtube/etc. General sentiment: not a speedster but better than using integrated graphics or really old discrete graphics, especially if you are playing DX11/DX12 titles. Maybe worth doing with like a 1060 or RX 480 or something but you definitely will lose some framerate. The good news is you can use GSync/FreeSync, though, easy mod for a laptop with a decent HQ processor.

Mining doesn't really care about bus speed (everyone runs at 1.1x1 anyway for stability/max cards per board) so you're fine there. The brick is just a standard Dell laptop brick, I think, so if you have access to those you can ignore that complaint too.

I think you will use more power running the laptops at 1-2 cards each than an ATX board driving a couple cards on risers; even a moderately old laptop is probably like 35W at the wall, and the eGPU unit itself may use some as well. Using tons of individual laptop supplies may also be less efficient than a single PSU (these adapters are 80+ gold right :v:). But if your limit is machines to drive the cards then it might be OK. On the other hand, if you order now it could be a month+ before they actually arrive, given the usual slowboat shipping from China.
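The host-overhead argument is easy to put rough numbers on (every figure here is assumed for illustration: ~35 W idle per laptop, ~60 W for an ATX board plus CPU):

```python
def host_overhead_w(n_cards, cards_per_host, idle_w_per_host):
    """Wall power burned by the host systems alone, excluding the GPUs."""
    hosts = -(-n_cards // cards_per_host)  # ceiling division
    return hosts * idle_w_per_host

print(host_overhead_w(6, 1, 35))  # six laptops, one card each
print(host_overhead_w(6, 6, 60))  # one ATX board driving 6 risers
```

Six laptops burn several times the host overhead of one riser board, before counting PSU efficiency losses.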

Paul MaudDib fucked around with this message at 03:42 on Jun 23, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

Thanks for that. Based on "For the median transaction size of 226 bytes, this results in a fee of 81,360 satoshis", am I right in saying that I'd need to "tip" 2 bucks and change to get my transaction thru a < 35 minute time window.

lol
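The quoted figures do work out to about two bucks, assuming a mid-2017 BTC price around $2,500 (the price is my assumption, not from the quote):

```python
fee_satoshi = 81_360   # quoted fee for a median 226-byte transaction
tx_bytes = 226
btc_usd = 2_500        # assumed spot price, mid-2017

sat_per_byte = fee_satoshi / tx_bytes
fee_usd = fee_satoshi / 1e8 * btc_usd   # 1 BTC = 100,000,000 satoshi

print(sat_per_byte)       # fee rate implied by the quote
print(round(fee_usd, 2))  # "2 bucks and change"
```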

much like with the growth in size of the Ethereum DAG file, Bitcoin eventually explodes because it holds a complete transaction record of everything ever and eventually regular people can no longer store a record and cryptographic signature for every pack of chewing gum ever bought on their hard drive, or at some point no longer even just withstand the load on their connection.

there are "disagreements" being "resolved" right now (brutal wars for control) between the Bitcoin Core devs and the miners. The core devs want Bitcoin to be a reserve currency where you resolve high-level transactions between other chains (aka make them rich), so they're pushing to fix a rather dumb flaw in the initial Bitcoin implementation (transaction malleability) that would let off-chain transactions work. Most people agree it's a dumb oversight that is now becoming a crippling flaw for scaling, but fixing it would break an ASIC optimization used by a huge percentage of the miners. The miners just want to turn up the block size, which works in a sense, but eventually you can only hold the chain on your 48 TB RAID array - or you start trusting authoritative nodes or some other network consensus, which is also a centralization of power. But everybody mines on pools anyway, so the decentralization is mostly theoretical.

Bitcoin is kinda super hosed in general too, because the Chinese pools own some massive share of the hash power because it's all about who can make the fastest SHA ASIC and they do private runs of their designs. It's not remotely decentralized.

Ethereum meanwhile is probably about to undergo another fork over the whole PoS switchover. I mean, do you think miners are just going to give up this cash cow for declining rewards and an eventual switchover to a totally different and unproven proof-of-stake system (which earns them nothing)? Ethereum Classic is rather tainted because the DAO attacker made off with a huge fraction of the money supply (~15% IIRC), so there's no going back there. But don't miners want "Ethereum as it exists right now, making them lots of money"?

on top of that, the memory file that Ethereum needs to work eventually becomes huge too, and at some point it limits you to streaming over the PCIe bus (i.e. 16 GB/s or less for commodity equipment, which is 1/10-1/30th of current bandwidth), on an algorithm that is basically memory bottlenecked
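For scale, that memory file (the Ethash DAG) grows roughly linearly with block height. A linear approximation, with constants as I remember them from the Ethash spec (the real size is the nearest prime below this bound, so slightly smaller - treat as approximate):

```python
def dag_size_gib(block_number, blocks_per_epoch=30_000):
    """Approximate Ethash DAG size: ~1 GiB at epoch 0, growing ~8 MiB/epoch."""
    epoch = block_number // blocks_per_epoch
    size_bytes = 2**30 + 2**23 * epoch
    return size_bytes / 2**30

print(round(dag_size_gib(0), 2))          # genesis epoch
print(round(dag_size_gib(3_900_000), 2))  # roughly mid-2017 block height
```

By this estimate the DAG has already doubled since launch, and it keeps going until it no longer fits in common card VRAM.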

so basically just like digital gold, real stable in value

Paul MaudDib fucked around with this message at 07:57 on Jun 24, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
oh yeah and I forgot, proof of stake is basically just paying interest to every address based on its current balance so this is intended to make bitlords into moustached rentier-class gentlemen of leisure in 50 years

this is different, it's the good kind of inflation :wotwot:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Computer Serf posted:

Why are you guys contributing to some scammy mining pool full of piss? :confused:

Why don't goons have their own pool for internet points?

lay in a course for redtext

Paul MaudDib fucked around with this message at 07:49 on Jun 24, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

eames posted:

Try this, some chinese company offering preorders for mining devices with a bunch of RX480 MXM modules, then hiking the price on already purchased devices and ultimately falling back to RX460 chips with late July delivery dates.
People apparently bought thousands of them.
Turns out they're "testing" and "burning in" customers devices in their own mining farm and start shipping them as they become unprofitable. :wiggle:
'course a couple of them burned down for good measure.

https://www.youtube.com/watch?v=sefJNg8Bv5I




:lol: it gets better:

https://www.youtube.com/watch?v=oXY2I6wJJO8

Hah remember the other day when I said "why don't they make mxm modules for rack servers"

but with buttcoin, and also not with the water cooling I conceived of when I placed my finger upon that monkey's paw

let me guess, since they use EPYC those are going to be in shortage too? drat it Maxxbot :argh:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Altcoins (and even Bitcoin itself) are a lot like penny stocks, it's easy to look at them and say you should have bought in while they were a penny, but there's a much greater chance that it goes to zero instead of ending up at thousands of dollars.

99% of altcoins, if you bought into them at launch they're probably worthless now.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Harik posted:

Nobody answered my question a page or so ago: Anyone else seeing claymore_zcash throttle their GPU way back for no apparent reason? It's sitting at 64c/52% fan and the limits are set to 75c. I know the algo is designed to require more resources than a standard ASIC design, so is it pulling from system RAM? If it is, I probably don't have my environment setup properly for it. Sometimes I get full speed, sometimes I get half, and I don't know why. It's all just buttpennies anyway but I don't like my hardware behaving in a way I can't explain.

Are you running under Windows? Use GPU-Z, from what I remember there is a "Bus Load" sensor that would tell you.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Fauxtool posted:

what exactly has caused the steep drop in profits over the last week. I was doing over $6 a day on a 1070 and now its barely $3. Still way in the black but troubling. Im not seeing a major drop in value so is it just all the added miners fighting for new blocks and only that? If nicehash wasnt so easy to get started they wouldnt never be mining, but neither would I

A combination of the dropping value of both BTC and ETH, and difficulty spiking through the roof as miners struggle to absorb every last midrange graphics card on the planet.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Twerk from Home posted:

What will these things be useful for in 2 months? Discount entry-level deep learning GPUs? PhysX GPUs? What's a no-output no-warranty 1060 6GB worth? $60?

An AMD mining card might still be usable in CrossFire, but since the 1060 doesn't have SLI fingers, a card without display outputs is pretty much only going to be useful for compute tasks, whether that's mining butts, deep learning, or PhysX or whatever.

Paul MaudDib fucked around with this message at 20:19 on Jun 27, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Lockback posted:

The 1070s are designed to fail gracefully if you overclock, and they actually package the software to do it with the product. They literally market the ability to OC. Nothing he said is outside the spec of the card.

Underclocked core/overclocked memory is actually how you are supposed to run cards while mining. That is the "being nice to them" approach.

Overclocking without overvolting or increasing the power limit is perfectly safe for the core. It's not as good as reducing the power limit and undervolting, but it's no worse than running stock. And there's actually no way to control voltage on memory at all, it just is what it is.

To be honest one of the really nice things about Pascal is that it actually doesn't even do well with overvolting. The card is basically power limited (and then temperature is a second factor), increasing the voltage will actually run you into the power and temp limits even harder and cause throttling. A lot of gamers actually run their cards undervolted because of this. Pascal is also pretty loving smart about stepping down voltage internally, if you pull back on the power limit it achieves the reduced TDP by not boosting as hard and cutting back on the voltage at the resulting lower clocks.

If you are gaming on Pascal, once you own a 1080/1080 Ti an AIO is one of the best upgrades you can make, because it keeps it nice and cool and lets it throttle up to max boost nonstop.
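The reason pulling back the power limit is so cheap in performance terms is the usual DVFS cube rule: dynamic power scales roughly with frequency times voltage squared, and voltage tracks frequency along the card's V/F curve. A toy sketch, using rule-of-thumb scaling rather than measured Pascal data:

```python
# Rule-of-thumb DVFS model, not measured Pascal numbers: dynamic power
# scales ~ f * V^2, and on a fixed V/F curve voltage scales roughly
# linearly with frequency, so power scales ~ f^3.

def relative_power(clock_fraction):
    """Approximate power draw as a fraction of stock, given a clock fraction."""
    return clock_fraction ** 3

# Dropping to ~85% of max boost (roughly what a reduced power limit
# does) costs ~15% performance but saves almost 40% power:
perf = 0.85
power = relative_power(perf)    # ~0.61 of stock power
efficiency_gain = perf / power  # ~1.38x work per watt
```

Under this toy model, 85% clocks draw about 61% of stock power, which is why miners chase reduced power limits and undervolts rather than peak clocks.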

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

1gnoirents posted:

If they have outputs, what exactly is the difference? poo poo if those are any cheaper than their "gaming" counterparts I dont see much reason to get a more expensive card with outputs you wont need

edit: I suppose its also protection for the manufacturer, if mining tanks then its not like they arent useful cards to sell

This is one reason I doubted the whole "mining cards" thing would ever happen. Why would NVIDIA/AMD give up a full-price sale of a second GPU and let you buy a discounted one instead? And mining cards with display outputs? That's literally just a price cut on whatever that card is.

There's a whooooole bunch of magical thinking on the part of AIBs with these mining cards. They make absolutely no sense to miners unless they are nearly functionally equivalent to a regular card, and at that point regular users are going to buy them too. If you gimp them too hard, miners are just going to buy regular cards as long as stock exists and only buy the gimped cards as an absolute last resort.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Comfy Fleece Sweater posted:

Any recs for a good AIO to keep my card in top shape? 1080ti

The NZXT G10/G12 bracket (virtually interchangeable, but the G12 supposedly installs more easily) has the best GPU compatibility; then you pick one of the compatible AIOs off the list on their website. Most AIOs on the market are OEM'd from Asetek (thanks to their patent on waterblock/pump combo units), and anything they make with the given mounting pattern should be compatible. I would probably just go for whatever's cheap (Corsair H55 or H75), but if you want to be fancy the NZXT Kraken series is supposed to be pretty decent, or you could go for a 2x120 or 2x140 unit like the H100i/H105/H110i. The G10/G12 bracket has a fan to blow air over the VRMs, but most people recommend putting some copper heatsinks on the VRMs to keep them a little cooler (I have not tried this myself).

There's also the EVGA Hybrid kits, which are a little more expensive and are specific to a particular model, but do look a lot more purpose-built instead of just being some poo poo cobbled together with a bracket. Note that the 1070/1080 models are different from the 1080 Ti models - the 1080 Ti has a second power connector and there's no cutout for that in the 1080 Hybrid kit's shroud. They charge a $50 premium for that model, or I guess you could take a hacksaw and do 'er up.

If you see yourself moving to a newer GPU in the near future then the G10/G12 gives you an upgrade path since it magically just fits pretty much anything.

Note that when you buy an AIO cooler (the cooler itself), do not buy refurb. The manufacturer warranty is important: if the cooler explodes and destroys other components while under warranty, all the major companies have an informal policy of refunding anything else that's damaged, as long as you didn't physically damage the cooler yourself. The warranty is normally 5 years, but on refurb units it's much shorter, usually a year or less (often 90 days). AIOs have the potential to be an expensive accident; if the manufacturer doesn't trust it, I don't see why I should either.

Paul MaudDib fucked around with this message at 21:45 on Jun 27, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Nybble posted:

I wonder if these mining cards would be good for Machine Learning too.

Definitely, assuming you don't need any of the Tesla-specific features like NVLink or RDMA-over-Infiniband then it's basically just a consumer Tesla card.

edit: does anyone here do machine learning? Do you know if machine learning networks are designed to scale across multiple cards, and if so can they do that reasonably well without the high-speed interconnects on Tesla (just peer-to-peer RDMA over PCIe)? Or is it single-card-only?

Paul MaudDib fucked around with this message at 21:38 on Jun 27, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Lockback posted:

Depends on the tool you are using. In tensorflow you can, but you need to model your data in such a way that each gpu can take a particular set. Other packages like deeplearning4J don't even always use the GPU all that efficiently to start with. But yeah, a lot of Dev environments will use multiple GPU systems.

For production it usually just makes more sense to use cloud computing, that's what we do.

So you couldn't train one giga-model but you could do 4 little ones at the same time (f.ex), or a whole bunch of smaller ones? Not trying to make a self-driving car or anything, just curious. I did some pattern recognition back in college (classifiers, etc), AI was joked about as a dead field since Minsky, etc, poo poo's very different now just a couple years later.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

For a single model, you can parallelize by either having each GPU hold the entire model but train on different batches of data at a time, or having each GPU hold a separate chunk of the model and flow the same batch of data through. How easy it is to do this depends on the ML framework you're using.

This is lot easier if you're doing model averaging or stacking, you can just have each GPU work on a separate model.


Ignoring software ecosystem issues, do you see any benefit to Vega's compute uarch over an NVIDIA equivalent with an equivalent amount of capacity or connectivity for this or any other common GPGPU tasks? At least from the presentation slides? (it's launch day and Vega is MIA)

I think at one point they showed off onboard M.2 NVMe capability, but I'm not sure how 2 TB of relatively slow (4 GB/s) scratch or storage space really helps vs the 16 GB/s on the PCIe bus. NVIDIA has long since supported RDMA to certain PCIe SSDs on the bus too, IIRC. What's the benefit of an onboard mount over RDMA, or over just having buttloads of system RAM? Or is it just a way of getting more lanes to the card?
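The data-parallel half of what shrike82 describes (full model copy per "GPU", different slice of the batch on each, gradients averaged before one shared update) can be sketched in plain Python, with lists standing in for devices and a single weight standing in for the model. Toy numbers throughout; the `shard_gradient` helper is made up for illustration:

```python
# Sketch of data parallelism: every "GPU" holds the full model, trains
# on its own slice of the batch, and the per-device gradients are
# averaged before one shared update. Plain Python stands in for real
# devices; the model is one weight fit by gradient descent on squared
# error.

def shard_gradient(w, xs, ys):
    """Mean squared-error gradient for one device's slice of the batch."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [3 * x for x in xs]          # ground truth: w = 3
n_gpus = 4
shard = len(xs) // n_gpus

w = 0.0
for _ in range(200):
    grads = [shard_gradient(w,
                            xs[i * shard:(i + 1) * shard],
                            ys[i * shard:(i + 1) * shard])
             for i in range(n_gpus)]
    w -= 0.05 * sum(grads) / n_gpus  # averaged update, applied everywhere
```

Model parallelism is the other axis: each device holds a chunk of the model and the same batch flows through all of them, which is where the fast interconnects start to matter.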

Paul MaudDib fucked around with this message at 03:05 on Jun 28, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Lockback posted:

Software ecosystem is kind of everything, thats why CUDA is such a big thing. Honestly, I only use non-cloud for doing POC-type work, so I really only care about memory size and if its CUDA-compliant to what my team is doing. Hell, we'll sometimes just CPU work rather than deal with some loving weird rear end implementation that isn't working with the GPUs on the first try.

No I fully understand this, I just want to purely know if you see any advantages to what AMD's proposing in terms of hardware. "If the ecosystem were there, X might be nice."

Paul MaudDib fucked around with this message at 03:56 on Jun 28, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
FX is loving terrible. CMT isn't a real core, it's more like a hyperthread, so you're basically running on the circa-2012 AMD version of an i3.

I mean it's your money or whatever but if you're gaming and you want more frames that should be first on your list, I loving guarantee you're horrifically bottlenecked on your CPU.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

MaxxBot posted:

Yeah the 1070 is a good mining card, seems to get more out of overclocking than my 1080 Ti does too.

Vega in its current form on the other hand is poo poo, only 30MH/s for Eth and it can't even run anything else yet.

That's also what a Fury puts out.

Ethereum is a memory-hard algorithm, memory speed isn't the only thing that affects it but it's a major contributor. A lot of miners underclock core and overclock memory. The fact that the processor is 1.5x as fast as Fiji probably doesn't matter as much as the fact that it's got the same VRAM bandwidth.
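A back-of-envelope way to see why bandwidth dominates: each Ethash hash does 64 random 128-byte reads from the DAG, about 8 KiB of memory traffic per hash, so the spec-sheet bandwidth sets a hard ceiling. A sketch using published bandwidth figures (real cards land well below the ceiling because the reads are random):

```python
# Upper bound on Ethash hashrate from memory bandwidth alone. Each
# hash touches 64 random 128-byte DAG pages, ~8 KiB of traffic; real
# cards sit well under this ceiling because the accesses are random.

BYTES_PER_HASH = 64 * 128  # 8192 bytes

def ethash_ceiling_mhs(bandwidth_gb_s):
    """Theoretical max MH/s for a given memory bandwidth in GB/s."""
    return bandwidth_gb_s * 1e9 / BYTES_PER_HASH / 1e6

fury = ethash_ceiling_mhs(512)  # Fiji HBM1, 512 GB/s -> ~62 MH/s ceiling
vega = ethash_ceiling_mhs(484)  # Vega HBM2, ~484 GB/s -> ~59 MH/s ceiling
```

Near-identical ceilings, which is consistent with Vega hashing like a Fury no matter how much faster the shader array is.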

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Side benefit to picking up a spare 1060 3GB for $150 a month ago: not only can I decide whether I need my 1080 for the individual game, but I also have a dedicated PhysX GPU for PUBG :v:

(apparently this really helps framerates for some reason, not sure how they could actually be doing enough physics simulation to matter)

Paul MaudDib fucked around with this message at 03:20 on Jul 3, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
If you're gonna sell cards, now is the time. An RX 480/580 makes what, like $3.50 per day now? Same for a 1070? A 1060 makes like $2? As the difficulty continues to increase, sooner or later miners are gonna come to their senses that $300-500+ for these GPUs is not sustainable at these payout rates, even ignoring power/etc.

(barring another massive run-up in the price of Eth/Bitcoin, which seems unlikely - if anything holding them seems really risky right now)

Some of us have viable use-cases for a few extra cards, but that basically means you're buying them at $400, whereas if you sell now and wait a few months you can probably pick them up for $100.
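Even taking today's payouts at face value and ignoring power entirely, the payback math on inflated card prices is grim. A quick sketch, using the street prices and per-day figures from the post (both of which only get worse as difficulty climbs):

```python
# Naive payback time at today's payouts, ignoring electricity and
# difficulty growth (both of which make it worse). Prices are the
# inflated mid-2017 street prices quoted above, not MSRP.

def payback_days(card_price_usd, usd_per_day):
    return card_price_usd / usd_per_day

rx_580 = payback_days(400, 3.50)    # ~114 days, best case
gtx_1060 = payback_days(300, 2.00)  # 150 days, best case
```

Four to five months to break even, and that assumes the payout never drops again, which it does, monthly.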

Paul MaudDib fucked around with this message at 23:56 on Jul 4, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Junior Jr. posted:

Should I buy an RX 580 now or wait till the Vega cards (not the Frontier one) come out? Maybe they'll have better rates than a 1060 6GB.

gently caress no, wait. I think the bubble will implode within like 2 months at the outside; the difficulty is doubling like every month or some poo poo, which means rewards are halving. This is literally the most pants-on-head-retarded possible moment to buy anything except a 1080/1080 Ti, and even then demand is driving those prices higher since they're literally the only cards right now that are bad at butts. Once the flood of cards hits the used market, retail prices will have to drop to stay attractive, probably past the old pre-bubble equilibrium points.

Vega's gaming performance is going to be terrible for its size, especially given its power consumption, but 1080-class performance at $300-400 would be OK, and FreeSync gets you a little extra savings. Not as much as you would think, though: many of the cheaper FreeSync monitors have flickering issues/etc. because they were never designed around FreeSync; AMD hacked it in after the fact, and it doesn't really work all that well. Good monitors are still expensive even with FreeSync.

Wait for the crash and get a 1080/1080 Ti or 1170 when Volta releases, unless you are already in the FreeSync ecosystem.

edit: oh you meant hashing, lol gently caress no. Vega sucks at Ethereum (RX 580-level performance) due to HBM. Maybe Vega cutdowns at $250 or so will be OK for mining but the bubble may well be over by the time the gaming flagship hits stores, let alone the cutdowns.

Paul MaudDib fucked around with this message at 00:09 on Jul 5, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Junior Jr. posted:

I already have one 1080Ti so I wonder if it's worth getting a 1070 or 1080 if their retail price drops.

Again, for what? Mining? Nah, when it pops it's over. It's already pretty much over even at pre-bubble prices for hardware, $3.50 per day with payouts halving every month means your $250 GPU will never pay itself back.
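The "never pays itself back" claim is just a geometric series: if the first month brings in about $105 ($3.50/day) and payouts halve every month, lifetime income is capped at twice the first month no matter how long you run the card.

```python
# Why halving payouts cap lifetime income at 2x the first month:
# 105 + 52.5 + 26.25 + ... converges to 210, short of a $250 card.

first_month = 3.50 * 30  # ~$105 at today's rate

def lifetime_income(months):
    return sum(first_month * 0.5 ** m for m in range(months))

cap = first_month * 2  # limit of the series: $210
# lifetime_income(12) is already within pennies of the $210 cap
```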

I bet that 1070s crash to $250 or less once the bubble pops, although since they're more efficient than RX 480s a certain number of miners will hang onto 1070s specifically.

But at some point soon (<6 months) Volta drops and gaming performance takes a huge jump anyway. I bet NVIDIA is making sure every card faster than the 1150 has GDDR5X so you can't mine on them.

Paul MaudDib fucked around with this message at 00:27 on Jul 5, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I feel like that old mathematician joke works well for Ethereum too, thanks to the difficulty doubling so rapidly.

An infinite number of miners walk into a bar and ask for their payouts. The first one gets a half bitcoin, the second one gets a quarter of a bitcoin, the third gets an eighth of a bitcoin... then the bartender sighs and pays the miners a full bitcoin, and says "you fellas really oughta know your limits".

edit: then the miners buy more GPUs to try and stay at 0.5 BTC per month, and die in a house fire

Paul MaudDib fucked around with this message at 00:32 on Jul 5, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

QuarkJets posted:

That's the objective of POS, but it also results in several attack vectors where you can use computational power to do things like bias the probability of being rewarded the next block in your favor or generating alternative blockchains in a way that leaves wallet software unable to determine which one is the "real" one (which has a number of uses, such as rewarding yourself all of the past mining rewards, or reversing future transactions). At this point POS is just an easier to exploit POW algorithm

Links? Haven't been following this closely but uh... I'm not exactly astonished.

Due to these kinds of issues I fully expect a massive hardfork when the Ice Age kicks in (which is supposed to be the switchover to POS). Like not only is it killing the golden goose for miners, but if you're a true believer then why would you want to test this kind of fundamental architectural change on a production system?

Ethereum Classic is the "immutable" blockchain where everyone follows the rules, the DAO attacker found a vulnerability and got paid for it. But it's also the blockchain where one attacker now owns 15% of the money supply and can crash the price at will, so it's pretty much tainted. The POS switchover is going to be the catalyst for a whole new Ethereum Classic hardfork, and this time it may be more successful. But either way, in the short term, Eth prices will plummet.

Paul MaudDib fucked around with this message at 08:08 on Jul 5, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Harik posted:

This is awesome, my $4000 NAS could earn me $6.50 a month! That's only 52 years before I start to turn a profit, quite the legacy to leave my kids. Why, I'd only be dead 12 years when they get their first check from it!

I'd say it's the stupidest altcoin, but no, it's just the worst one I've seen today.

Sounds like basically a private torrent tracker but with bitcoin instead of ratio.


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Junior Jr. posted:

1. I'm rocking an Aerocool MOD XT 750W PSU and it's running my 1080Ti on full load using only 100W, while the card's heating up to 82°C on average, is this normal or should it be cooler than that?

Crank your fans to 100% if you haven't and then look at it again.

Pascal starts throttling at 80C (down to boost clocks rather than OC boost). By 90C you're down to base clocks. 100W is nowhere near what you should be pulling and seems like a strong indicator that the card is thermal throttling.

Even at like 80-90% power limit you should be well above 200W.
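To put numbers on that: the power limit slider scales the board's TDP directly, and a reference 1080 Ti is a 250 W part, so the expected draw at any limit is simple arithmetic (percentages here are just examples):

```python
# A reference 1080 Ti is a 250 W TDP board; the power limit slider
# scales that directly. 100 W is far below any sane limit, which
# points at thermal throttling rather than a power cap.

TDP_1080_TI = 250  # watts, reference board

def expected_watts(power_limit_pct):
    return TDP_1080_TI * power_limit_pct / 100

at_80 = expected_watts(80)  # 200 W
at_90 = expected_watts(90)  # 225 W
reported = 100              # what the poster sees under full load
```

Even at an 80% power limit the card should be pulling twice what's being reported, so the limiter is something else, almost certainly temperature.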
