EoRaptor
Sep 13, 2003

by Fluffdaddy

eames posted:

Why is Intel suddenly so successful in the IGP department after trying (and failing) so hard for so many years? :confused:

Intel wanted, for many years, to adapt x86 to graphics. They therefore didn't bother to develop more than a basic graphics 'block' based on a more conventional rendering ASIC. Now they've finally abandoned the x86 graphics core, and with the need to catch up they are probably more willing to adopt 'not invented here' technologies to get the job done. Intel has always had the design and process skills to do this; it's been purely a corporate choice not to.

Also, we are finally seeing a 'tipping point' where good enough performance is available at a low enough power envelope on the cpu side that it is creating new device possibilities. Graphics, as it relates to the appearance of responsiveness, is now a key differentiator.

This doesn't mean Intel has solved their driver problems, btw, so be prepared for plenty of quirks for a while yet.


EoRaptor
Sep 13, 2003

by Fluffdaddy

booshi posted:

On my recently new gaming/HTPC I have a GTX 660. I have tried, but can't get 5.1 surround sound to work with it, so I've been using my mobo's optical audio out for now. This is a pain in the rear end though because it has to be connected every time I switch to the PC, since I only have one digital audio input and it's in use. Any ideas for getting 5.1 over HDMI working? I'm running Win7.

e: It currently only reports two channels when plugged into the TV, and it only reported 2 when I had an HDMI cable plugged right into my receiver.

nVidia claims their drivers work by reading the supported features status from the EDID information provided by whatever the HDMI cable is plugged into.

Because your receiver passes the signal through to the TV, the card is likely getting the TV's stereo-only EDID. See if there is a way to stop your amp from doing this; I don't know if there is any way to force the nVidia drivers into a specific mode.
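For the curious, "reading the supported features from the EDID" boils down to walking the CEA-861 extension block and looking at the short audio descriptors, which advertise the maximum channel count. A minimal sketch of that, assuming you have a raw 256-byte EDID dump from whatever tool you like (not nVidia's actual driver code, just an illustration):

```python
# Rough sketch: pull the advertised max audio channel count out of an EDID dump.
# Assumes `edid` is the raw bytes of a 256-byte EDID with a CEA-861 extension.

def max_audio_channels(edid: bytes) -> int:
    """Walk the CEA extension's data blocks and return the largest channel count
    advertised in any Short Audio Descriptor (0 if no audio blocks are present)."""
    if len(edid) < 256 or edid[126] == 0:          # byte 126 = number of extension blocks
        return 0
    ext = edid[128:256]                             # first extension block
    if ext[0] != 0x02:                              # 0x02 = CEA-861 extension tag
        return 0
    channels = 0
    pos, dtd_start = 4, ext[2]                      # data blocks sit between byte 4 and the DTDs
    while pos < dtd_start:
        tag, length = ext[pos] >> 5, ext[pos] & 0x1F
        if tag == 1:                                # tag 1 = audio data block (3-byte SADs)
            for sad in range(pos + 1, pos + 1 + length, 3):
                channels = max(channels, (ext[sad] & 0x07) + 1)
        pos += 1 + length
    return channels
```

If what the card sees is the TV's EDID, that number comes back as 2, and the driver dutifully offers stereo only.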

EoRaptor
Sep 13, 2003

by Fluffdaddy

Sir Unimaginative posted:

The scariest thing is that those drivers are WHQL Certified!

WHQL is a series of API-level calls and sequences that must produce a specific, measured result. For instance, the driver must be able to draw a triangle of a specific shape that, when compared against a master triangle bitmap, is a 100% match. There are also tests where the driver is fed junk and must respond as sanely as possible.

These tests, and there are millions of them, are run on a QA farm nVidia has in house, and if the driver passes, they can sign it using a Microsoft-issued certificate. WHQL says nothing about the reliability of any particular piece of hardware, just that the driver ticks all the checkboxes (features and performance).
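To make the "draw a triangle and compare to a master bitmap" idea concrete, here's a toy version of that style of conformance test. It is purely illustrative, not the real WHQL harness; the stand-in rasteriser and reference image are made up:

```python
# Toy golden-image conformance test: render a fixed triangle and require a
# 100% pixel match against a stored reference. Purely illustrative.

def render_triangle(width: int, height: int) -> list[list[int]]:
    """Stand-in for 'the driver draws a triangle': fill the lower-left half."""
    return [[1 if x <= y else 0 for x in range(width)] for y in range(height)]

def conformance_test(width: int = 8, height: int = 8) -> bool:
    reference = [[1 if x <= y else 0 for x in range(width)] for y in range(height)]
    rendered = render_triangle(width, height)
    # WHQL-style pass criterion: every single pixel must match the master image.
    return all(rendered[y][x] == reference[y][x]
               for y in range(height) for x in range(width))

assert conformance_test(), "driver output deviates from the master bitmap"
```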

It's not a perfect system, but it's a million times better than what came before, which was nothing.

But yeah, nvidia has hosed up a bunch of recent stuff.

EoRaptor
Sep 13, 2003

by Fluffdaddy

veedubfreak posted:

Also, it has pretty much been proven at this point that early XFX and Powercolor cards are in fact, true 290x chips that were just sold as 290s with a bios lock. For once it pays off to be an early adopter :)

This isn't the first, or the second, or even the third time AMD has done this. nVidia has also done it, but they tend to 'fuse' the chips so they can never be fully enabled again, regardless of BIOS, whereas AMD lets it slide.

For benchmarking, I'd say the obvious 3DMark run, but maybe also look at recent benchmarks on AnandTech or similar and see what they are using and how. If you replicate those, you'll know how your system compares, and you'll provide something other people can compare against too.

EoRaptor
Sep 13, 2003

by Fluffdaddy

dpbjinc posted:

Are IPS panels not able to hold the image long enough for G-Sync to work, or is there some other reason it's TN only? I would think if IPS monitors could handle it, they would have done it by now and have made TN completely obsolete.

It'll work fine on an IPS panel? One hasn't been made or announced yet, but all the 'magic' happens in the decoder hardware that takes a DP/DVI signal and turns it into a voltage map for the LCD panel. Nothing about the LCD panel itself actually matters that much.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Grim Up North posted:

To be honest, the whole time I've been wondering if they really want to sell a board where you have crack open a monitor (made by a third party) that is not meant to be opened by end-users. That seems fraught with a whole lot of liability concerns and even if it's sold as a enthusiast, do it on your own risk kit, it could see it lead to negative PR.

Anyways, offer me a Dell Ultrasharp compatible kit and I'll be interested.

nVidia developed the tech, but couldn't really get any monitor makers to 'byte'. They knew it was a solution to a real problem, but manufacturers didn't get it.

So, they took an example 'gaming' monitor and modified it to show G-Sync off, both to the OEMs and to end users (through reporters). They didn't want to wait for any hype they generated to die down, either, so they chose to sell this prototype 'mod' directly to end users to keep the hype alive.

Asus is also building it into their monitors directly, but the price premium is huge, because it's a generic FPGA purposed for doing this, not a dedicated ASIC.

The second generation of G-Sync enabled devices should only be 30 to 50 dollars more. Once it's integrated into an ASIC, the cost will drop dramatically.

I hope a third generation appears soon after that is an official part of the DP spec, and can then be used by any video card that so chooses and has basically no cost premium. G-sync could benefit more than just gamers (anything that slides information around rapidly could be helped, cad/cam, financial, etc).

The eventual goal would be to un-bind refresh rate from display entirely, and switch to a spec that could push a frame and a 'draw now' command out as packetized data.
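As a rough sketch of what "push a frame and a 'draw now' command as packetized data" could look like (the packet layout here is entirely invented for illustration, not any real DisplayPort protocol):

```python
# Hypothetical packetized display link: the source pushes frame data whenever it
# is ready, then a tiny 'draw now' packet tells the panel to present it. No fixed
# refresh clock anywhere. Packet types are made up for illustration.
from dataclasses import dataclass

@dataclass
class FrameData:
    frame_id: int
    pixels: bytes        # compressed or raw scanout data

@dataclass
class DrawNow:
    frame_id: int        # present this frame as soon as the panel can

class PacketizedPanel:
    def __init__(self) -> None:
        self.pending: dict[int, bytes] = {}

    def receive(self, packet) -> None:
        if isinstance(packet, FrameData):
            self.pending[packet.frame_id] = packet.pixels   # buffer, don't display yet
        elif isinstance(packet, DrawNow):
            self.present(self.pending.pop(packet.frame_id)) # display on command

    def present(self, pixels: bytes) -> None:
        print(f"scanning out {len(pixels)} bytes")

panel = PacketizedPanel()
panel.receive(FrameData(frame_id=1, pixels=b"\x00" * 1024))
panel.receive(DrawNow(frame_id=1))   # frame appears when the GPU says so, not on a clock tick
```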

EoRaptor fucked around with this message at 20:05 on Dec 13, 2013

EoRaptor
Sep 13, 2003

by Fluffdaddy

Shaocaholica posted:

I'm actually a bit surprised that it took this long for G-Sync to be a thing. Its not like the tech is all that revolutionary and its not like the problem is new.

It's mostly because standards for this type of thing (DVI, HDMI, DisplayPort) all evolved at a time when either refresh rates really did exist (CRT) or the standards were driven by content providers and consumer devices (broadcast TV and video) that were locked into a fixed refresh rate for a variety of technical reasons unrelated to the cable standard.

nVidia probably took a crack at this via the standards committee, but realized super early there was simply no interest in or realization of the problem. They chose to go it alone, which is fine at this stage.

It's my hope that this gets added to an existing standard as a new interface protocol (DisplayPort is ideal for this), and thus gets broad support and cost reduction. Decoupling the display protocol from the interface protocol (as thunderbolt does) is already seen as the way to go, so things should move reasonably quickly. I'd guess 2016 before it's an industry standard appearing on a wide selection of monitors.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Incredulous Dylan posted:

Oh, now that actually brings me over to team green. I've had a bitch of a time getting twitch to work with my configuration and this seems like an awesome way to do it. I've been reading so much on the R9 series, which I am now avoiding due to the noise/heat issues. My case leans towards acoustics rather than thermal and I don't want to mess with the great temps I've gotten it to. Gotta read through again and re-educate myself on what to swap my 680 out for.

I *think* AMD cards have most of the pieces needed for them to implement a similar feature. They certainly have the video encoder hardware; they just need the frame-buffer readback hardware and hooks to use spare VRAM for video chunks.
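A rough sketch of the "spare VRAM for video chunks" idea, assuming the simplest possible design: encoder output goes into a fixed-size ring buffer and older chunks are silently dropped, so "save the last N seconds" is just a matter of flushing the buffer to disk.

```python
# Minimal replay-buffer sketch: encoded video chunks land in a bounded ring
# buffer (standing in for spare VRAM); saving a replay just drains the buffer.
from collections import deque

class ReplayBuffer:
    def __init__(self, seconds: int, chunks_per_second: int) -> None:
        self.chunks: deque[bytes] = deque(maxlen=seconds * chunks_per_second)

    def on_encoded_chunk(self, chunk: bytes) -> None:
        """Called for each chunk the hardware encoder emits; old chunks fall off the far end."""
        self.chunks.append(chunk)

    def save_replay(self, path: str) -> None:
        with open(path, "wb") as f:
            for chunk in self.chunks:
                f.write(chunk)

buf = ReplayBuffer(seconds=30, chunks_per_second=2)
for i in range(120):                       # a minute of gameplay...
    buf.on_encoded_chunk(bytes([i]) * 64)  # ...but only the last 30 seconds is retained
buf.save_replay("replay.bin")
```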

I do hope they are looking strongly into it, as nVidia is really grabbing people's attention with how simple and low impact it is. I can record 1080p game footage without a measurable frame drop (GTX670) and it's a very nice feature to have.

Hopefully if AMD does do it, they can make the software as simple as nVidia's.

EoRaptor
Sep 13, 2003

by Fluffdaddy

HalloKitty posted:

Sorry, I wasn't saying HyperTransport was a vs NVIDIA thing, but more that Intel went its own way in creating QPI when HyperTransport would have done the job.

Nothing particularly fascinating, although if Intel had adopted it as well, maybe we would have seen HTX slots in actual use.

A bit of a derail, but QPI is extremely well modeled as a transistor set and an electrical interface within Intel. They can easily place it alongside/inside practically any existing silicon and know 100% how it will behave. They are also free to tinker with it without worrying about compatibility, which is another plus. Also NIH syndrome.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Agreed posted:

I have a question, do you like it? Does your setup play games well? That'd be cool.

veedubfreak's secret shame is that he plays MWO, a game so terribly developed that it doesn't support SLI. This rig might be some sort of huge e-peen, but the reality is he can *only* get just the tip in.


I also play this game

EoRaptor
Sep 13, 2003

by Fluffdaddy

Professor Science posted:

they did it because repi has been pushing for a new 3D API for years and years and years and AMD probably paid a lot of money (that's how developer relations works in the game industry, PhysX is the same)

This is about Microsoft no longer being interested in Windows as a prime gaming platform. DX has stagnated on PC, yet the leverage provided by the prime gaming platform (Xbox) makes using other APIs a much higher development burden than the performance returned justifies.

Though mantle is NOT related to the xbox platform API, the underlying similarity between the graphics processors means that optimization applied to the xbox build can be used in the mantle build.

I don't know if AMD really hopes Mantle will take off, but a 10% gain for a major title that had Mantle added on after development was mostly complete isn't bad. This is also Mantle's first appearance in a game, and developers will hopefully get better with it.

EoRaptor
Sep 13, 2003

by Fluffdaddy

veedubfreak posted:

What's something useful I can get from Newegg for 42 bucks. I got a gift card from them because they put the 290s on sale the day after I ordered mine. Expires on the 9th, so I'm trying to figure out what to buy that won't just be a waste.


http://promotions.newegg.com/premier/trial/index.html

You seem to buy enough stuff?

EoRaptor
Sep 13, 2003

by Fluffdaddy
So it looks like DirectX 12 will be a thing.

I'm not sure if Microsoft is really going to be able to get this off the ground. They seem to want to have it for mobile and desktop platforms, which risks making a fat API that is difficult to learn and starved for features.

Also, I'd put odds on them making it Windows 8+ only, even though this would effectively make it stillborn. They probably aren't going to learn from the last times they've tried this.

EoRaptor
Sep 13, 2003

by Fluffdaddy

deimos posted:

Specially since they are pushing for 9 to replace XP. I don't mind the shorter dev cycles if it means more cutting edge features as long as they don't gouge for updates, hopefully MS makes 9 a sub $60 upgrade.

Yeah, they really need to get off the price horse, especially for upgrades. Most new Windows sales are via OEM, so lowering the upgrade price and turning over the installed base of older versions faster is probably a net win for them.

Yes, it's more complex than a 60 dollar game, but you aren't going to be able to ignore the change in people's perceptions of what software should cost in the face of Apple. Sell subscription services a la OneDrive backup and take the hit on the core OS.

EoRaptor
Sep 13, 2003

by Fluffdaddy
Well this happened
http://arstechnica.com/gaming/2014/03/facebook-purchases-vr-headset-maker-oculus-for-2-billion/

I don't get where Facebook is going with a bunch of its recent purchases. Hopefully they don't google this company: stop all its projects, re-assign or lay off the staff, and the technology is never heard from again.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Shaocaholica posted:

This is relevant from the monitor thread:


If this picks up speed, it would be lovely if Nvidia deliberately didn't support it to sell more gsync parts.

nVidia is hemming and hawing about it, but it's part of the DisplayPort 1.2a standard, so they won't be able to use the DisplayPort branding or pass certification if they don't implement it. AMD seems to already have the hardware to support DP1.2a with some of the 2x0 generation cards, so those likely only need driver updates. nVidia might have the same thing, but whether they choose to expose the feature is another story; they might make it new-cards-only.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Shaocaholica posted:

So if they limit it to new cards, that pretty much shrinks their g-sync market down to just 7xx owners? I mean, why would anyone with a 8xx or newer card want to buy into gsync?

I doubt even nVidia knows. They sank a lot of money into G-Sync, only for AMD to pull the rug out with FreeSync, and for VESA to accept that version because it was stupidly easy to change from optional to required in the specification. This is why some monitors can be upgraded: the display controller makers were targeting mobile applications with their ASICs, so the hardware just needs to be turned on. G-Sync is dead before product launch and nVidia knows it, but backing down is a huge loss of 'face', so company politics is probably going to drive them off the rails for a bit.

I guess R != R, economists, back to the drawing board.

EoRaptor
Sep 13, 2003

by Fluffdaddy

warcake posted:

On the VRAM front, does having 2 cards in SLI double the amount of VRAM you have or is more of a series thing? I have no idea how it works.

Each card must hold identical memory contents, so the total memory available for textures (etc.) doesn't increase. No, SLI provides no additional usable memory.
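In other words (toy numbers, assuming classic alternate-frame SLI where every resource is mirrored on both cards):

```python
# Toy illustration: SLI mirrors resources on every card, so the usable VRAM pool
# for textures is bounded by one card, however many cards you add.
def usable_vram_mb(per_card_mb: int, num_cards: int) -> int:
    return per_card_mb            # not per_card_mb * num_cards

print(usable_vram_mb(2048, 2))    # two 2 GB cards in SLI -> still 2048 MB for assets
```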

EoRaptor
Sep 13, 2003

by Fluffdaddy

HalloKitty posted:

That's not only awesome, but it makes me wonder if one monolithic GPU that was physically enormous and the power of a normal GPU today could have been made 20 years ago. Obviously the resistance through the chip that size would be insane so the heat output would be furnace-like, but if it was cooled with liquid helium or something, the resistance would fall and maybe you could have had the future!

The big problem here is that, as you scale up the size of a transistor, its switching speed slows. This is because electricity is not instant, and it takes time for electrons to move around. The more electrons you are moving to change the state, the longer that state change takes. Switching faster is based on reducing the number of electrons needed through reduced size, or increasing the flow of electrons through increased voltage.

You also run into problems with moving information around, as the timing of information moving between elements on a chip is also strictly controlled, and changing the scale changes the timing. Related to that, the longer the timing, the less information you can send in any given time period. A larger chip simply could not move enough information around quickly and reliably.
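As a rough back-of-envelope for that timing point (wildly optimistic, since it ignores RC delay in real wires and just assumes signals travel at roughly half the speed of light):

```python
# Back-of-envelope: how long a signal takes to cross a die, and the clock rate
# that crossing would cap you at. Ignores RC wire delay, so reality is much worse.
C = 3.0e8                 # speed of light, m/s
PROPAGATION = 0.5 * C     # assumed on-chip signal speed (optimistic)

def max_cross_die_clock_ghz(die_width_mm: float) -> float:
    crossing_time_s = (die_width_mm / 1000.0) / PROPAGATION
    return 1.0 / crossing_time_s / 1e9

print(f"25 mm die       : ~{max_cross_die_clock_ghz(25):.1f} GHz ceiling")   # ~6 GHz
print(f"200 mm 'mega die': ~{max_cross_die_clock_ghz(200):.2f} GHz ceiling") # under 1 GHz
```

Even with ideal wires, a physically enormous chip can't be clocked anywhere near as fast as a small one, which is the point above.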

In answer to your question, no, it would not be possible to take any modern chip design and make it on an older process, it wouldn't function.

EoRaptor
Sep 13, 2003

by Fluffdaddy
I love that it has 3 DisplayPorts, but they are 1.2, two versions behind the current spec, and expressly locking out any Adaptivesync/Freesync support in hardware. Because nVidia.

EoRaptor
Sep 13, 2003

by Fluffdaddy

All the specs I'd read stated DP1.2, but it would be nice if they did 1.2a at least. As for freesync support, wccftech is seemingly a little eager to publish without fact checking, but it would be nice if they did support it. Now we just need a freesync monitor.

EoRaptor
Sep 13, 2003

by Fluffdaddy

r0ck0 posted:

Starting to sound like a broken record.

Why don't more people recommend the gigabyte card? With 3 display ports, 2 dvi and 1 HDMI, proprietary monitor connection with support for four 4k monitors, comes equipped with a back plate, heatsink also makes contact with the VRM for additional cooling. It seems like the best card out.

http://www.guru3d.com/articles_pages/gigabyte_geforce_gtx_970_g1_gaming_review,9.html

This guy shows the gigabyte card is the only one that doesn't throttle when overclocked.

http://www.overclock.net/t/1516121/gtx-970-comparison-strix-vs-msi-gaming-vs-gigabyte-g1

Gigabyte has, in the past, been a very cheap producer of video cards and motherboards. I agree it's odd, but a history of total lemons every so often has made buyers wary.

It also doesn't help that they seem to be pricing at a premium over ASUS and MSI, which is difficult to swallow considering their history and closer adherence to the reference PCB (a cost saver).

I own a Gigabyte 670 and there is nothing wrong with it; if you feel the 970 is the right card for you and you want the features Gigabyte offers, buy it. They have generally decent warranty support and the card itself is shaping up well, but many people will be guided by history, whether it is still accurate or not, and look to a brand that currently has a better reputation.

EoRaptor
Sep 13, 2003

by Fluffdaddy

r0ck0 posted:

That worked well for the EVGA supporters, oh wait. Just saying you can't always judge a new thing by the performance of the old. I was planning on getting an evga since my 560 evga has been great, overclocked well, but not this time. Perhaps gigabyte has learned from their past cost saving mistakes.

I was only commenting on the current disinterest in Gigabyte, not providing any facts about which video card is best. nVidia's Greenlight program has really helped card makers not be complete poo poo, and the brand as a whole has benefited, so you are very unlikely to get something that doesn't perform to specification. If you need the extra display ports and don't need the comfort of the goon hivemind, buy it. You will only change the history of a manufacturer by reporting your experiences with it, not by complaining that current experiences don't seem to merit current opinion.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Star War Sex Parrot posted:

AMD got the win for the Retina 5k iMac. I was really hoping for Maxwell. :(

http://forums.somethingawful.com/showthread.php?threadid=3372494&pagenumber=280#post436364224

Probably too late in the dev cycle to switch tracks to Maxwell and make their production deadlines.

EoRaptor
Sep 13, 2003

by Fluffdaddy

John DiFool posted:

Anyone want offer thoughts on 670 vs 970? I'm running 1440p and the 670 does pretty well, but will probably start to show its age against newer titles.

The 970 is clearly better, both in performance and energy/heat. However, if you aren't running into games you can't run acceptably now, wait, because it will always be cheaper to buy in the future.

Once you find a game where you really think it looks better with those extra features turned on but it just isn't fast/smooth enough, then go out and buy. A lot of people in this thread really do want 'the latest shiny', and even if it's a great product with great performance, they may not have actually needed it right then.

EoRaptor
Sep 13, 2003

by Fluffdaddy

SlayVus posted:

I'm going to test this when I get back home. I have the msi golden, so it would suck if there was a hardware defect with my $400 970. I wonder if it is only just a driver problem though.

The Tech Report link pretty clearly explains what is happening and why. It's not a hardware defect, just an effect of the way the 970 is designed.

http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980

EoRaptor
Sep 13, 2003

by Fluffdaddy

AVeryLargeRadish posted:

Errr, but that is about fill rate, they are talking about the 970 only using 3.5 of it's 4GB of onboard memory while the 980 uses it's whole 4GB when needed.

Ah, sorry, for me that made sense, but I'd made some assumptions to get to my conclusion. Each block of ROPs/shaders/etc. on a GPU probably has a fixed set of memory addresses it can access quickly, all offset from each other. They also have a nearby shared space they can read textures etc. from quickly, and another nearby space they can write to quickly.

All of this is fixed in the logic of the particular block to make memory access very quick and conflict free. With the 970 having missing blocks, there are going to be chunks of memory to which no blocks have quick access. A block can read and write there (because any block can probably read and write anywhere if it needs to), but there is probably an arbitration process where it has to check for existing memory locks and make sure it doesn't step on anything, and I bet this arbitrated path is extremely suboptimal.

This is all part of the optimization tradeoff, and it makes sense to do it this way. I bet the whole driver stack knows about this, loading textures etc. and assigning rendering tasks to specific memory locations and GPU compute blocks.
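To make that speculation concrete, here's a toy allocator in the spirit of the description: a fast 3.5 GB segment and a slow 0.5 GB segment, with the driver spilling into the slow segment only once the fast one is full. The segment sizes are the public 970 numbers; the policy itself is guesswork, not nVidia's actual driver behaviour.

```python
# Toy two-segment VRAM allocator: prefer the fast 3.5 GB partition, spill into
# the slow 0.5 GB partition only when forced. Policy is speculative.
FAST_MB, SLOW_MB = 3584, 512

class SegmentedVram:
    def __init__(self) -> None:
        self.fast_used = 0
        self.slow_used = 0

    def allocate(self, size_mb: int) -> str:
        if self.fast_used + size_mb <= FAST_MB:
            self.fast_used += size_mb
            return "fast"                      # full-speed path
        if self.slow_used + size_mb <= SLOW_MB:
            self.slow_used += size_mb
            return "slow"                      # arbitrated, much slower path
        raise MemoryError("out of VRAM")

vram = SegmentedVram()
placements = [vram.allocate(512) for _ in range(8)]   # 4 GB worth of textures
print(placements)   # the first seven land in 'fast'; the last one spills to 'slow'
```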

Now, should this have been disclosed and these cards sold as having only 3.5GB of memory? Ehhh... you'd be hard pressed to make the case, as the GPU can access the memory, it's just so slow the drivers are set to almost never allow it. I don't think it is hitting performance that hard, even for games that 'want' 4GB of texture memory.

EoRaptor
Sep 13, 2003

by Fluffdaddy

BurritoJustice posted:

Say whatever you want about them not implementing 1.2a, but that doesn't change that the whole "everything is FreeSync and the module is just $$DRM$$" thing is patently false. Mobile GSync still is not FreeSync, and still they cannot just issue driver support for FreeSync/AdaptiveSync on their current GPUs.

But that is exactly what just happened?

The current gsync branding is from a leaked driver that enables the feature on an existing laptop that has no special hardware beyond eDP support.

Installing this driver turned on adaptivesync and gave it the branding of mobile gsync.

Now, this won't happen on desktop cards, because despite DP 1.2a and DP1.3 both being finalized and having chips available prior to the release of the 9xx series, it wasn't added as a feature to those cards. Even when the 960 came out nearly a year later, it still wasn't added. I wonder why?

EoRaptor
Sep 13, 2003

by Fluffdaddy

jkyuusai posted:

Just to verify, you do realize that the laptop didn't work in 100% of the cases that a display with the GSync module has been shown to work, yes? There's issues with flickering and sometimes the display completely blanking altogether at low frame rates. These issues are reproducible on the ROG Swift, which has the GSync module, but it's noted that they're much less severe.

Link

I'm going to speculate, but I'd guess those situations are either when a vblank doesn't arrive before some other hardware 'timer' runs out and forces the panel to refresh without any valid data, or arrives when the panel controller cannot accept it.

The first is probably software fixable, with a maximum time between vblanks being set. The second is trickier, and probably comes about from the panel getting a vblank right when it is doing something else that it cannot interrupt, and the result essentially resets the panel to 'empty' until the next frame arrives. This might not be software fixable, but improved controller design+firmware can probably address it. A minimum time between vblanks or between end of data and the vblank might help.
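A sketch of how those two guards might look in a panel controller's frame-pacing loop. This is pure speculation to illustrate the idea, not how any real scaler firmware works, and the interval values are arbitrary:

```python
# Speculative frame-pacing guard: redraw the last frame if the GPU is too slow
# (max interval), and briefly hold a new frame that arrives too soon (min interval).
import time

MIN_INTERVAL = 1 / 144          # panel can't accept frames faster than this
MAX_INTERVAL = 1 / 30           # don't let the panel starve longer than this

class PanelPacer:
    def __init__(self) -> None:
        self.last_refresh = time.monotonic()
        self.last_frame: bytes | None = None

    def on_frame(self, frame: bytes) -> None:
        """Called when a vblank/frame arrives from the GPU."""
        now = time.monotonic()
        if now - self.last_refresh < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - (now - self.last_refresh))  # hold the frame briefly
        self._refresh(frame)

    def tick(self) -> None:
        """Called periodically by the controller's own timer."""
        if time.monotonic() - self.last_refresh > MAX_INTERVAL and self.last_frame:
            self._refresh(self.last_frame)      # self-refresh with the previous frame

    def _refresh(self, frame: bytes) -> None:
        self.last_frame = frame
        self.last_refresh = time.monotonic()
```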

The GSync hardware module had a bunch of onboard cache RAM, probably to allow it to hold enough data to avoid a no-data situation. Since it still exhibited the problem sometimes, it can be argued that the fix is trickier to get right than has been publicly talked about.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Well, the actual killer was Vulkan. It's also, honestly, a much better solution.

There is also the argument that without Mantle showing the performance overhead of outdated standards/implementations, nothing would have been done.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Zero VGS posted:

I don't think there's much debate, Microsoft would have absolutely kept their thumbs up their asses if Mantle hadn't showed up. I think AMD can be proud of shaming everyone else into taking optimizations seriously even if competitors are uh, picking up the Mantle, so to speak.

The Xbox 360 and Xbox One both featured a hugely modified version of DirectX that had many of these optimizations already done, so Microsoft clearly knew there was an issue and how to fix it. I think they also knew the cost of bringing those changes to windows would be pretty steep if they had to upgrade existing versions.

My basic thinking is the development looked like this:

1. MS develops improved DX for consoles
2. AMD, the supplier of console CPUs and GPUs for the then-current and future Xbox, says they should port that to Windows.
3. MS says they aren't willing to discuss that at this time
4. AMD takes everything they learned during the Xbox driver development and begins the same process on their Windows drivers.
5. AMD partners with some dev studios, and hammers out how to access these driver improvements to benefit games, brands it Mantle at some point.
6. Mantle is publicly announced along with game support
7. Turns out, MS and nVidia have been working on DirectX improvements for windows for a while now, covered by an NDA that excluded AMD.
8. MS announces the 'new' DirectX version as part of a new Windows version
9. nVidia instantly (in hardware timeline terms) announces DirectX 12 support for their brand new video cards, even though the spec isn't final and there is not even a beta sdk to test with.

The timing involved is just too weird. There is no way MS could turn around an announcement with a full feature set so quickly if they didn't already have it in the pipeline, and nVidia was really, really quick off the mark to guarantee DX12 compatibility for the 9x0 series cards.

nVidia's and MS's actions here really smell like AMD was being deliberately shut out to try to cost it an entire hardware cycle without DX12 support. AMD has plenty of problems, but this feels like just nasty behavior against them.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Wiggly Wayne DDS posted:

Do you have some sort of blog where I can read more of these... creative interpretations of technology?

I prefer to keep my madness purely in random forum posts.

And I also missed the AMD supported-cards list announcement. It hasn't appeared anywhere in their branding, whereas nVidia puts it everywhere, which is a bit odd.

I still wonder why AMD went faffing about with Mantle if they knew MS was making DX12. It just seems like, at some point, MS simply didn't tell them.

EoRaptor
Sep 13, 2003

by Fluffdaddy

BurritoJustice posted:

Hey I can help a bit more on this. I run a single 980 at 1440p 100hz, and mostly play modded falloutNV. I have ~200 mods merged into 136 plugins (FNV has a limit of 138 or so) along with tonnes of 4K/2K textures etc. I also run a very intense ENB config alongside SweetFX. My 980 is at 1420 core 8000 mem, and I average around ~50FPS with regular dips into the 40s and less regular dips into the 30s in intensive scenes. It's playable but I'd like more performance for sure. I'm waiting for the new amd cards to come out to see how that goes before I blow a chunk of money on a second 980. I do also tend to encounter CPU bottlenecks from time to time with my 3570K at stock.

Honestly, I bet you are far more CPU-bound than GPU-bound with that game. A pipe-dream DirectX 12 version of the Fallout: NV Gamebryo engine would probably double the performance of that game.

EoRaptor
Sep 13, 2003

by Fluffdaddy

veedubfreak posted:

As with previous generations, doubling the memory on these cards is pointless. They are already limited by the actual ability to push pixels not the memory.
With the announcement of the announcement in June from AMD and the Titan X you might keep an eye out for used 980s at this point.
I picked up my pair for 450 each. Ended up splurging on a new motherboard yesterday. Open box Asus Maximus 7 Formula with 2 year warranty from Microcenter. Looks like whoever bought and returned it didn't even touch the thing. She's leak testing right now :)



I think this should scratch the itch until they finally get below 28nm.

Ah, static-sensitive electronics on nylon carpet, what a good idea.

Dude, at least put the bag it came in under it.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Zero VGS posted:

I would argue that the stacks of $295 GTX 970 Amazon Warehouse cards are the "mid-range": http://amazon.com/gp/offer-listing/B00NNXVPS2/

Especially if you can bug Amazon and get them to cough up a Witcher download code which would probably subsidize you another $30-40.

I'm not bothered by used cards or whatever, save for the fact that the buy 970, return 270 (or whatever) scam is probably still going strong. It's just not worth the hassle.

Card makers should embed an RFID chip under the GPU itself, or something else on the PCB that is difficult to remove, so the card can be scanned on return and its exact details clearly matched against what was purchased.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Buying something, taking it out of the box, sticking something else in the box (like rocks, even) and returning it (sometimes even re-shrink-wrapping it) is a long-standing tradition with computer parts.

I also commented on how card makers could stop this scam cold.

EoRaptor
Sep 13, 2003

by Fluffdaddy

repiv posted:

Nah, 90 degree motherboards are the way of the future. Every manufacturer clone the Silverstone FT02/FT05 design TIA

NLX?

EoRaptor
Sep 13, 2003

by Fluffdaddy

Twerk from Home posted:

They could still run 2160x1200 @ 90hz, just have it be upscaled 1080x600 @ 30fps.

Tons of console games run at 600-ish vertical lines of resolution and upscale, and that should work fine going forward. Maybe to make up for the width they may have to drop down to ~400 vertical lines.

Upscaling resolution will be fine, but frame interpolation of any type is going to create motion sickness.

The time between turning your head and the view shifting MUST be lower than the brain's tolerance for disparity between what your eyes see and what your balance (ears) reports. 90Hz seems to be the target here, but I bet that is the minimum standard, and being able to do low persistence out to 120 or 144Hz would probably be a noticeable improvement.

Oculus should probably push for an overscan spec, where additional resolution data is pushed up to the headset, which can then slide the viewable area around to buy an extra frame or two of time while the video card pushes up the next frame. This would also let the headset smooth out jitter a bit without needing to tax the host computer.
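A rough sketch of that overscan idea: the GPU sends a frame slightly larger than the panel, and if the next frame is late the headset slides the visible window around using the latest head-tracking delta. The field of view, resolution and overscan margin below are made-up numbers, not anything Oculus has specified:

```python
# Speculative overscan reprojection: crop a panel-sized window out of an
# oversized frame, offset by how far the head has turned since the frame
# was rendered. All numbers are illustrative only.
PANEL_W, PANEL_H = 1080, 1200          # per-eye panel resolution (assumed)
OVERSCAN = 100                          # extra pixels rendered on every edge
DEGREES_PER_PIXEL = 90 / PANEL_W        # assumed ~90 degree horizontal FOV

def crop_window(yaw_delta_deg: float, pitch_delta_deg: float) -> tuple[int, int]:
    """Return the top-left corner of the visible window inside the oversized frame."""
    dx = int(yaw_delta_deg / DEGREES_PER_PIXEL)
    dy = int(pitch_delta_deg / DEGREES_PER_PIXEL)
    # Clamp so the window never leaves the overscanned frame.
    x = max(0, min(2 * OVERSCAN, OVERSCAN + dx))
    y = max(0, min(2 * OVERSCAN, OVERSCAN + dy))
    return x, y

print(crop_window(0.0, 0.0))    # (100, 100): centred, no head movement
print(crop_window(2.0, -1.0))   # head turned a little since render time -> shifted crop
```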

EoRaptor
Sep 13, 2003

by Fluffdaddy

Bleh Maestro posted:

Gay. They need something between the 960 and 970 way more than that.

I think nVidia is probably happy with the contrast between the 970 and 960. The 970 is enough better to tempt people into splurging on it, but not so much pricier that people who choose the 960 instead feel like they're losing out.

A 960 Ti will eventually show up, but nVidia isn't in any rush, especially with the competition flailing around so obviously. 4 to 6 months, maybe?


EoRaptor
Sep 13, 2003

by Fluffdaddy

veedubfreak posted:

Yep. I'll be rebuilding Friday night.

Before you do, you should take a glance at the event log. It's almost always useless, but the symptoms you describe could be caused by a faulty HDD/SSD or a bad SATA cable, and that will be recorded there.

The event log will show any read/write issues Windows had with the drive. You could also use a SMART reader to try and decipher what the drive thinks is going on.
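If you want to go beyond the event log, smartmontools is a quick way to pull the drive's own opinion. A minimal sketch, assuming smartctl is installed and on the PATH and that the drive shows up as the first physical disk:

```python
# Minimal SMART health check using smartmontools' smartctl (must be installed).
# Under smartctl's device naming the first physical disk is usually /dev/sda.
import subprocess

def smart_health(device: str = "/dev/sda") -> str:
    result = subprocess.run(
        ["smartctl", "-H", device],       # -H prints the overall health assessment
        capture_output=True, text=True,
    )
    return result.stdout

print(smart_health())   # look for "PASSED" / "FAILED" in the assessment line
```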
