Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Boten Anna posted:

Since this was the 90s, by both of them do you mean you would actually open up your computer and swap them out? Or since this was the 90s, do you mean they both just sat on the uniform AGP/PCI slots and you just plugged in the one you wanted to use at the time?

The Voodoo and Voodoo2 didn't replace the other graphics card in the system; they were 3D-only devices that sat between your "main" video card and the monitor with a little loopback cable (this is why, if you look at them, they have two VGA ports: one in, one out). When you used the Voodoo card, it made a little clunk thanks to some mechanical relays, disconnected your other video card, and took over the VGA output. When you stopped using it, it went "clunk" again, and turned into a dumb bunch of wires that just passed the other video card's signal through.


Space Gopher
Jul 31, 2006


Boten Anna posted:

Does it work basically by having each card draw different elements on the screen and them merging them together, using hardware or even just the raw video signal somehow?

Is it possible in the future that the connection between the two will be more of a... for a lack of a better word, logical link that basically just uses additional cores to throw more hardware at the rendering similar to a multi-core CPU?

I'm probably phrasing this in all kinds of terrible ways what with having an only rudimentary understanding of how any of this works under the hood.

To your first question: yes. There are several different rendering modes, but the end goal is to split work between the GPUs at a high level and then recombine the images. They can switch off alternate frames, or split the screen (horizontally, vertically, or in a checkerboard pattern) and each contribute to the same frame.
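To make that a little more concrete, here's a toy Python sketch (my own illustration, nothing to do with actual driver code) of how alternate-frame and split-frame rendering could divvy up the work:

code:
# Toy illustration of two multi-GPU rendering modes. Not real driver code -
# just a sketch of how work could be divided and recombined.

def alternate_frame_rendering(frames, gpu_count=2):
    """Each GPU renders whole frames, switching off every other frame."""
    assignment = {}
    for frame_index in frames:
        assignment[frame_index] = frame_index % gpu_count  # GPU 0, 1, 0, 1, ...
    return assignment

def split_frame_rendering(frame_height, gpu_count=2):
    """Each GPU renders a horizontal slice of the same frame;
    the slices are recombined into one image afterwards."""
    slice_height = frame_height // gpu_count
    slices = []
    for gpu in range(gpu_count):
        top = gpu * slice_height
        bottom = frame_height if gpu == gpu_count - 1 else top + slice_height
        slices.append((gpu, top, bottom))
    return slices

if __name__ == "__main__":
    print(alternate_frame_rendering(range(6)))  # {0: 0, 1: 1, 2: 0, 3: 1, ...}
    print(split_frame_rendering(1080))          # [(0, 0, 540), (1, 540, 1080)]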

It's certainly possible on a theoretical level to couple GPUs together closely. After all, a modern GPU is basically a big pile of dumb processors. The problem is bandwidth. On the same silicon die, coupling all those processors together and to the support hardware they need (memory controllers, rasterizers, etc) is difficult but certainly possible. On the same circuit board, it's exceptionally difficult. AMD tried to implement something like this a while back, in order to improve the performance of "SLI on a stick" dual-GPU cards, and dedicated a bunch of die space to it - but nothing using it ever came to market, and the feature was dropped in the next generation. Across a PCIe link and SLI/Crossfire bridge, it's a "you're loving kidding, right?" problem - there's just not enough bandwidth. And, even though very smart people are continually working to develop higher-bandwidth interconnects, bandwidth requirements for GPUs keep going up, too. It's simpler, cheaper, and still works fairly well to just duplicate memory contents and ask each GPU to contribute a chunk of a fully rendered frame.

Dogen posted:

That really didn't explain it at all!

The Wikipedia article on SLI is pretty good, though. Apparently it works in a completely different way now. The abbreviation is the same, but what it stands for is different (Scan-Line Interleave then, Scalable Link Interface now). Oh, well.

The theory behind how it works is actually pretty close: each card renders a lower-resolution image that contributes to the whole. It's just not broken up by scan lines any more, because modern video cards don't have the same passthrough access to video output (and convenient sync signals) that the Voodoo2s did.

The name is Nvidia just blatantly cashing in on an old nostalgic trademark, though.

Space Gopher
Jul 31, 2006


My favorite low-profile packaging wackiness will probably always be Sparkle's lo-pro 9800GT:



You can't really see it because of the VRM heatsinks, but it had a PCIe power connector and ran at full stock clocks. The GPU might not be so impressive any more, but that board design is some engineer's magnum opus.

Space Gopher
Jul 31, 2006


reboot posted:

So, I'm looking at getting the fabled 690 GTX. Although, I'm using an LGA 1366 socket motherboard due to the fact that I'm running an i7 990x. After doing some research it seems that the 690 will work on PCI-E 2.0.


Now, my main question. Will the 690 GTX work with a GA-X58A-UD7 motherboard. If it won't, would someone recommend me a LGA 1366 motherboard that will allow it to run?

Thanks.

Yes, it will work - but why are you doing this? Unless you're in some crazy one-in-a-million situation, you're better off buying a 670 and putting the other $600 or so in an envelope. The second the 670 starts being annoyingly slow, open the envelope and buy whatever's top-of-the-reasonable-line.

Space Gopher
Jul 31, 2006


Endymion FRS MK1 posted:

Apparently now the 7950 gets turbo too.

http://www.rage3d.com/index.php?cat=75#newsid33993264

What exactly is the advantage? Why not just overclock it to the turbo clocks and not worry about the dynamic change?

Because different loads can draw different amounts of power. The classic example is Furmark, which usually draws more power than just about any game. With dynamic clocks, it's possible to run up the clocks when there's power and thermal overhead, but keep from burning up the chip or VRMs when the load draws more power for a given clock.
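Here's a toy sketch of that logic in Python - made-up numbers, nothing like the actual boost/PowerTune firmware, just the shape of the idea:

code:
# Toy model of dynamic clocking: run the clock up while there's power and
# thermal headroom, back it off when a heavy (Furmark-style) load pushes the
# power draw up. All numbers here are made up for illustration.

POWER_LIMIT_W = 200.0
TEMP_LIMIT_C = 85.0
BASE_CLOCK_MHZ = 800
TURBO_CLOCK_MHZ = 925

def pick_clock(watts_per_mhz, temp_c):
    """watts_per_mhz: how much power the current load draws per MHz of clock.
    A typical game might be ~0.20 W/MHz, Furmark more like 0.25 W/MHz."""
    clock = TURBO_CLOCK_MHZ
    # Step down one bin at a time until both budgets are respected.
    while clock > BASE_CLOCK_MHZ and (clock * watts_per_mhz > POWER_LIMIT_W
                                      or temp_c > TEMP_LIMIT_C):
        clock -= 13
    return max(clock, BASE_CLOCK_MHZ)

print(pick_clock(0.20, temp_c=70))  # light load: stays at 925 MHz turbo
print(pick_clock(0.25, temp_c=80))  # heavy load: clamps back to the 800 MHz base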

Space Gopher
Jul 31, 2006


Dominoes posted:

Any reason to buy 2 680s over a 690? I'm leaning 680s because the 4gb models have more effective ram, which I've heard is useful for multi-monitor resolutions, or will be in the near future.

Why are you buying a thousand dollars worth of consumer GPUs in the first place?

Space Gopher
Jul 31, 2006


Tezzeract posted:

It'll be hard for Sony to do backward compatibility because they're weaning off of Cell processors. Microsoft could have an easier time. But yeah streaming is one way to handle backward compatibility.

Though the funny thing is that most console gamers don't seem to care too much about BC. Sure it's a way to beef up the library, but console owners bought the system to play "current-gen" games, and they're also more than happy to get HD remakes.

Microsoft and Sony are both going from Power to x86, and the 360's GPU has some funky extra features that wouldn't be fun to emulate. On the 360, a little bit of very fast RAM (just enough for a 1080p framebuffer) is integrated directly into the GPU, which allows for nearly-free antialiasing and a few other tricks.
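Back-of-the-envelope on that eDRAM, assuming a plain 32-bit color buffer with no AA or depth:

code:
# Rough framebuffer math: 32 bits per pixel, color buffer only.
width, height, bytes_per_pixel = 1920, 1080, 4
framebuffer_mib = width * height * bytes_per_pixel / (1024 * 1024)
print(round(framebuffer_mib, 1))  # ~7.9 MiB, which squeaks into the 360's 10 MB of eDRAM
# Add a depth buffer or multisampling and it stops fitting in one pass,
# which is part of why so many 360 games rendered at 720p or in tiles.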

Streaming probably isn't going to happen. It requires a lot of infrastructure, it makes MS's/Sony's system performance depend on uncontrollable outside factors, and it gets into a lot of thorny legal issues. Instead, look for software-based backwards compatibility using high-level emulation. Microsoft already took this approach with their last major architectural transition, and Sony made some use of it on the PS3 before they dropped backwards compatibility entirely.

It is going to be a selling point, though. Yes, now that we're firmly in the 360/PS3 generation, people don't give a crap about being able to play Xbox/PS2 games. But, when a console first comes out, it's a feature people want. The hardcore gamer types are going to line up on launch day regardless, but for someone who's on the fence, "you can trade in the old one for some store credit and get a discount, but still play all your old games" is a draw.

bull3964 posted:

Devs yeah, but there's only so much graphical information that can be seen on a 40" 1080p screen at 8 feet.

Smoothing out the framerate is a definite plus. However, I'm just pointing out that last gen jumped from SD to HD and this gen is going to stay on HD so one parameter isn't changing as significantly. It's going to take a lot for people to say "We couldn't play this on the current generation of consoles?"

Well, for one, the current generation of consoles has a hard time running at native 1080p. The standard trick is to render the 3D scene at 720p (or sometimes even less!), scale it to 1080p, and then put 2D UI stuff over the top of that. There's also plenty of headroom in GPGPU processing, which neither current console supports. And, in the rest of the system, more RAM means nicer textures, larger levels, and so forth. A faster CPU (and that GPGPU integration) means more sophisticated procedural animation and more complex gameplay. There's a lot of room for improvement.

Space Gopher
Jul 31, 2006


Factory Factory posted:

Also, good news, everyone! Seems that Adobe Premiere Pro CS7 will support OpenCL for the next version of the Mercury engine.

E: And it looks like Virtu may be pretty much obsolete - according to the release notes, QuickSync and OpenCL now work in Windows 8 even when you have a discrete video card.

Also looks like Intel has delivered a giant gently caress you to AMD's HSA, because Haswell's GPU will include a proprietary DirectX hook for unified CPU/GPU memory addressing - you can pass the CPU a pointer for the GPU's RAM.

More good news: Intel's loosened up the licensing restrictions on QuickSync, which will hopefully mean we get one step closer to a small, cheap, low-power NAS box that can transcode video on demand to whatever the client wants.

Space Gopher
Jul 31, 2006


Urzza posted:

So, I'm having a little bit of a problem understanding AMD Dual Graphics. As far as I can tell, the integrated graphics on AMD CPUs can be Crossfired with some ATI cards. Is it worth my time to look at this if I'm making something for playing games on? Does anyone have any more info on this? AMD's site hasn't been a big help beyond "Yes, it's a thing you can do."

No, it's not worth your time. It only gives you a performance boost with really low-end graphics hardware on both sides of the equation, and you'll get a much bigger boost from a modest GPU upgrade.

Also, AMD CPUs aren't a good buy right now for various reasons. See the parts picking thread for more.

Space Gopher
Jul 31, 2006


Radio Talmudist posted:

So I popped in a new fan for my side panel to cool off my rig. It's a cooler master fan with a 3 to 4 pin adapter. It looks like this:

http://www.frozencpu.com/products/1...mqOMxoCGXXw_wcB

Am I supposed to connect both the 2 ends on the left to molex connections from my PSU, or can I just connect the 4 pin connection?

If you can plug the fan directly into the motherboard, do that.

Otherwise, what you're posting doesn't make sense. That's a weird specialty cable designed to go from a variable-voltage fan controller straight to fans, and the one connector on the left side shouldn't hook up to anything coming out of your power supply.

If you're looking at a more standard adapter cable with male pins on the small three-pin end and both male and female four-pin Molexes, it's just a passthrough. Back when 4-pin Molex connectors on drives were common, sometimes you'd run out of power supply outlets. Low-power devices like fans came with male and female ends so you could piggyback off of the power connector going to your DVD drive or whatever. You shouldn't even be able to plug both ends of the connector into your power supply. If you do need to use the adapter, just plug the male Molex end into your PSU and leave the female end dangling.

Space Gopher
Jul 31, 2006

All right, terrible motherboard gimmick time!

Here's one of the all-time greats:


e:

SwissArmyDruid posted:

DP does output audio, and can use a passive adapter to HDMI. Maybe a bad cable, yes, but there are a couple other things that could be causing problems.

Eh, not necessarily. DisplayPort and HDMI are two completely different signal standards; some outputs can simply put an HDMI-compatible signal on the DisplayPort pins (dual-mode DisplayPort, aka DP++), which is what a passive adapter relies on. I don't know that particular card, but I'd try an HDMI-to-DVI cable. Unlike DisplayPort, HDMI and DVI are almost identical electrically up to 1920x1200/1080 levels of bandwidth, so passive adapters almost always work.

Space Gopher fucked around with this message at 05:52 on Mar 2, 2015

Space Gopher
Jul 31, 2006


veedubfreak posted:

I thought tearing was mostly when the video card is pushing frames faster than the monitor can keep up.

Not quite. It happens whenever the framerate and monitor refresh rate are out of sync, and it doesn't matter which one is too slow or too fast.

To understand why, you have to go back to old-school displays. GPUs and monitors are very simple-minded when it comes to displaying pixels: they almost always update each pixel left to right, top to bottom, in lockstep with a defined clock. This is absolutely necessary when you're talking about controlling the intensity of three electron beams in a CRT that have to keep moving in a fixed pattern, at a speed that's slow enough to not overwhelm the timing hardware but fast enough to keep the phosphors lit. It's not necessary with an LCD, but the old standard generally works, and everything works with it.

At least, until the GPU raises its hand and asks, "so, what happens when the monitor starts refreshing the top-left pixel, I have to follow the clock, and I don't have a brand-new frame ready right this instant?" It has to start out with sending the old frame, because that's all it's got, but when the new frame is ready mid-refresh, there's a choice. Start displaying it now, and the monitor gets the freshest possible information on the screen right away, but the user sees a tear in the image - the top half of the monitor shows the old frame, and the bottom half shows the new frame. Since the GPU's not locked to the refresh rate and is just throwing new frames down the pipe whenever it's got them, this is "vsync off."* The other choice is to keep displaying the old frame for the entire refresh cycle, then start showing the new frame. You get a beautiful tear-free image, long after it was first rendered. Since the GPU is paying attention to the refresh rate and syncing new frames to it, this is "vsync on."

With G-sync and Freesync, the monitor and GPU are freed from the tyranny of that single clock. Instead of having to either tear the image or wait for an entire frame, the GPU can just tell the monitor, "hey, hold on for just a few more milliseconds, so I can get the next frame ready." The instant the GPU puts the frame together, it can tell the monitor to start refreshing again. Presto, no tearing and no delay.

*the "v" stands for "vertical," where the pixel getting refresh data makes the vertical jump from the bottom of the screen all the way back to the top. Even DVI-D and HDMI allow for a pause here, so if there's an electron beam it can make its journey back to the starting point; HDMI uses this "vertical blanking interval" as a convenient place to pack audio data down the video wires. Horizontal sync is also a thing in CRTs, where the pixel getting refreshed goes from the far right of the display to the left of the next line, but we don't care about it here unless we're talking about how awesome old-school SLI setups were.

Space Gopher
Jul 31, 2006


Khagan posted:

GPU mixing is confirmed to be a thing with DX12.

http://www.guru3d.com/news-story/microsoft-confirms-directx-12-mix-amd-and-nvidia-multi-gpus,2.html

Personally, I see this going the way of the LucidLogix Hydra, but maybe Microsoft can get it to work.

Haha, no it's not.

The "proof" is a forum post of a screen capture of a helpdesk conversation, with a tech who almost certainly has zero access to devs, saying something incredibly ambiguous.

Most likely, what she's talking about is the ability to run several cards in a system at once, from different manufacturers. Not in any SLI/Crossfire configuration between them, mind you, just in the sense of "they're all displaying a desktop." This is functionality that was present in XP, went away in Vista due to the changed driver model, came back in Win7, and has been part of Windows and DirectX ever since.

Space Gopher
Jul 31, 2006


That turns a DisplayPort output into HDMI. It doesn't work in reverse, to turn an HDMI output into DisplayPort.

Space Gopher
Jul 31, 2006


n.. posted:

The whole thing is awkward. These events are so lame.

It's a nice little memento mori kind of thing. No matter how wealthy and powerful you might be, you're not a rock star, and there are good reasons for that. You can buy cool things, but you can't buy cool.

See also: Paul Allen.

THE DOG HOUSE posted:

lol wtf was that

also



hmmmmmmmmmmmmmmmmmmm i dont see where my fps fits in those 5 categories mr jen

Say you own a business. Would you rather sell to nerds living in basements who spend a few hundred bucks every few years, or Google, national labs, and startups swimming in VC money?

Space Gopher
Jul 31, 2006


ChesterJT posted:

Need some help with PCIe bandwidth. I've been googling all night and still feel like I'm missing something. I have an arcade running off a core2duo 3.0ghz. The mobo only has a PCIe 1.1 x16 slot, currently with a radeon hd 4670 (pcie 2.0) in it. I was looking to upgrade the video card to get some performance boost on the cheap without upgrading the whole thing. I understand 1.0-3.0 are all compatible, it's just a bandwidth limitation. People talk about bandwidth saturation and how to calculate those but I feel like I'm doing something wrong.

So to figure a card's theoretical bandwidth max it's mem clock x 2 x bits. For that hd4670 (here) it shows an effective memory clock of 1600 (which is doubled, so it's really 800 correct?) and 128bit. So my math says it has a bandwidth of 204.8Gb/s, or 25.6 GB/s. Or you take this gtx 1080 which shows effective memory clock of 10000 (quaded, so 2500 x 4) and 256bit so 2,560Gb/s or 320GB/s.

Assuming all that math checks out, how do they not consistently oversaturate PCIe bandwidth which shows at 4GB/8GB/16GB for 1.0/2.0/3.0 respectively? I know I'm missing something very obvious here but damned if I can figure it out.

The memory bandwidth you're thinking of is between the GPU and its dedicated memory on the card. Almost everything the GPU needs to do its job is kept in local memory.

PCI Express is only used to move commands and data between the host system and the GPU. That's why it's not an issue most of the time, but a squeeze on PCIe bandwidth can cause sudden and dramatic but transient framerate drops if the card has to stream a bunch of new models and textures into memory ASAP before they're needed to render a scene.
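To put numbers on it - same arithmetic you already did, just separated into "on the card" and "across the link":

code:
# Same math as in the quote, split into on-card VRAM bandwidth vs. the PCIe link.
def vram_bandwidth_gb_s(effective_mem_clock_mhz, bus_width_bits):
    """Effective (already multiplied) memory clock in MHz, bus width in bits.
    Returns GB/s between the GPU and its own VRAM."""
    return effective_mem_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(vram_bandwidth_gb_s(1600, 128))   # HD 4670: 25.6 GB/s to its own VRAM
print(vram_bandwidth_gb_s(10000, 256))  # GTX 1080: 320.0 GB/s to its own VRAM

# Approximate PCIe x16 bandwidth per direction, in GB/s. This is the pipe that
# carries commands, textures, and geometry - not the GPU's per-frame memory traffic.
pcie_x16_gb_s = {"1.1": 4, "2.0": 8, "3.0": 16}
print(pcie_x16_gb_s)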

e: way back in the day, the original idea behind AGP was to develop something like a shared-memory architecture between the GPU and CPU - the GPU would have a dedicated "accelerated graphics port" connecting to system memory at very high speed, instead of going over the slower shared PCI bus. Intel even came out with a video card, the i740, which only had enough local memory for a framebuffer; it streamed everything else over AGP. Unfortunately, performance was awful. Everybody else in the market realized that if you were going to go to the expense of building a separate board, it was worth giving it dedicated memory, and using the fast connection to move data onto the card. The i740 architecture got recycled into some disappointing integrated-graphics solutions, and the only high-performance systems that use shared memory are game consoles that just use a big chunk of high-bandwidth graphics memory for everything.

Space Gopher fucked around with this message at 07:10 on Jan 17, 2017

Space Gopher
Jul 31, 2006


ChesterJT posted:

Well duh, that makes sense now. Thank you! So is there a good way for me to pick a higher end card that will still perform well but won't cause those issues? Certain features/abilities of the card to avoid? Or is it more about how it's being used at the time by software?

More VRAM is better. Don't get tricked by low-end cards that pack on a bunch of terrible slow RAM, of course. After that, it's pretty much up to the software.

If you play games which fit everything they need in VRAM, then no worries. Most anything that will run well on a Core 2 Duo and old chipset without slamming into a processor bottleneck will probably fall into this category.

If you play an open-world game that needs to stream in complex models and big textures constantly, then you can probably expect regular hiccups.

sauer kraut posted:

Memory bandwidth is not really an issue that's talked about this generation, especially since the demise of multisample anti-aliasing (MSAA) in favor of shader solutions.
The last card that could run into problems was the 4GB Nvidia 960 with meagre 128bits, but even then only at high resolutions or with MSAA enabled.

It's not about on-card memory bandwidth, though; it's about PCIe bandwidth. That's almost never an issue with modern PCIe revisions, but running a modern card at PCIe 1.1 speeds, it's reasonable to expect some impact.

Space Gopher
Jul 31, 2006


Combat Pretzel posted:

That's weird. Because the video part of HDMI is pretty much DVI-D 1:1 (as in physically), just in a different connector.

Only up to 1920x1200 at 60Hz. After that, HDMI starts running faster signals down a single channel, while DVI runs two channels in parallel.
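Rough math on where that cutoff comes from - the 10% blanking overhead is a CVT-reduced-blanking-ish guess, not exact timings:

code:
# Why ~1920x1200@60 is the single-link limit: single-link DVI (and early HDMI)
# tops out around a 165 MHz pixel clock.
def pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=1.10):
    return width * height * refresh_hz * blanking_overhead / 1e6

print(pixel_clock_mhz(1920, 1200, 60))  # ~152 MHz: fits under 165 MHz
print(pixel_clock_mhz(2560, 1440, 60))  # ~243 MHz: needs dual-link DVI or faster HDMI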

Space Gopher
Jul 31, 2006

Back in the late 90s and early 2000s, the molex setup for extra power was pretty common, but it was also a Wild West situation. Power supplies only expected to run a few hard drives or optical drives over those connections, and in the days of multi-rail PSUs, balancing power draw was a real pain.

So, the six- and eight-pin power connectors were designed as part of the PCI Express standard to fix those issues - they have a defined power draw of either 75W or 150W that both sides can depend on (fun fact: the extra wires on the 8-pin connector don't carry any extra current, they're just there to say "yes it's ok to pull double power"). Any PCIe card can use them for extra power, although realistically only power-hungry coprocessors (video cards, compute-only GPUs, and oddballs like Xeon Phi) need to go past the 75W you can get from the slot.
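The budget math, if you want it spelled out:

code:
# Power budget for a PCIe card, per the connector ratings mentioned above.
SLOT_W = 75        # what the x16 slot itself is specified to deliver
SIX_PIN_W = 75     # 6-pin PCIe power connector
EIGHT_PIN_W = 150  # 8-pin PCIe power connector

def board_power_budget(six_pin=0, eight_pin=0):
    return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(board_power_budget())                        # 75 W: slot only
print(board_power_budget(six_pin=1))               # 150 W
print(board_power_budget(six_pin=1, eight_pin=1))  # 300 W: typical big-GPU setup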

Space Gopher
Jul 31, 2006


Encrypted posted:

Always disable hardware acceleration in those launchers or other windows that are on your screen as they will always give you noticeable frame drops if you are trying to have smooth 120fps in games.

It's not really lowered performance, but the game becomes somewhat choppy or 'unsmooth' even if you are getting high enough fps.

This is nonsense.

Every window in Windows is hardware accelerated. Even if a window renders itself in software, the OS copies that into a texture before it composites the desktop using the same calls and techniques used for 3D scenes and games. A window is just a two-triangle textured surface. Windows has done this since Vista; macOS and every modern Linux window management system do the same thing.

It's possible there are some really terrible launchers out there that might have a performance impact. But, as a general rule, you're not saving any resources. Your "increased smoothness" is a placebo.
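If it helps to see what "a window is a textured surface" means, here's a crude CPU-side mockup of the compositing step - the real compositor does the equivalent on the GPU, drawing each window as a two-triangle textured quad:

code:
# Crude CPU-side mockup of desktop composition. Each window has already drawn
# itself into its own buffer ("texture"); the compositor just pastes those
# buffers into the final desktop image.
import numpy as np

desktop = np.zeros((1080, 1920, 3), dtype=np.uint8)

def composite(desktop, window_texture, x, y):
    h, w, _ = window_texture.shape
    desktop[y:y + h, x:x + w] = window_texture  # "draw the quad"
    return desktop

game_window = np.full((720, 1280, 3), 40, dtype=np.uint8)      # pretend game frame
launcher_window = np.full((300, 500, 3), 200, dtype=np.uint8)  # pretend launcher

composite(desktop, game_window, 100, 100)
composite(desktop, launcher_window, 1400, 700)
print(desktop.shape, desktop[150, 150], desktop[750, 1450])    # sanity check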

Space Gopher
Jul 31, 2006


Taima posted:

Could Microsoft allow Windows to run on the new console? I assume the architecture is ultimately very similar? That would be a boss move.

I'm already all-in on the high-end PS5 Pro or whatever but that would at least make me think about switching.

It will, beyond a shadow of a doubt, run Windows, just like the XB1 has done since day one. At this point, basically every piece of Microsoft-branded computing hardware, from HoloLens to Surface Hub, runs some kind of Windows variant.

They aren't ever going to give you the desktop UI, Win32, or the ability to run arbitrary binaries at all, though.

Space Gopher
Jul 31, 2006


Riflen posted:

The PC version was badly made. https://sherief.fyi/post/arkham-quixote/

"TL;DR turns out the streaming system keeps trying to create thousands of new textures instead of recycling them, among other issues."

That's not to say that such bad practices will not be happening in the future, but it has nothing to do with the capabilities of the PC platform or the APIs.

I haven't done console development, but this feels a lot like an operation that would be much faster on a shared memory architecture. If you have shared memory, creating those texture resources within the same memory space is likely going to be a lot faster and easier than trying to shuffle a bunch of data out of main memory, across PCIe, and into the GPU's local memory.

For this one game, it's possible to patch in texture pools and get some performance benefits. But, it's easy to come up with situations where that might not be true. Just increasing the texture variety - and decreasing what's possible to throw into the pool and reuse - could do it.
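The pooling idea itself is simple enough that a hand-waved sketch fits in a few lines (this is just my illustration, not the actual engine code):

code:
# Hand-waved texture pool: reuse freed textures of the same size/format
# instead of creating (and destroying) thousands of new ones while streaming.
from collections import defaultdict

class TexturePool:
    def __init__(self):
        self.free = defaultdict(list)  # (width, height, fmt) -> reusable textures
        self.created = 0

    def _create_texture(self, key):
        self.created += 1              # the expensive path: driver/GPU allocation
        return {"key": key, "id": self.created}

    def acquire(self, width, height, fmt):
        key = (width, height, fmt)
        if self.free[key]:
            return self.free[key].pop()  # cheap: recycle an existing allocation
        return self._create_texture(key)

    def release(self, texture):
        self.free[texture["key"]].append(texture)

pool = TexturePool()
for _ in range(10000):                 # stream same-shaped textures all day...
    tex = pool.acquire(2048, 2048, "BC7")
    pool.release(tex)
print(pool.created)                    # ...and only 1 real allocation ever happens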

Space Gopher
Jul 31, 2006


Taima posted:

It's not just HDMI 2.1 people waiting for this. Ray tracing needs to get less intensive immediately. The Turing cards (of which I own a 2080 so no bias here) will become a footnote. At best a step towards something good, and at worst a failure. A true low point in graphics card history. Let's move on to something better.

People have been trying to figure out how to make ray tracing less intensive for decades. Nobody's come up with anything substantive yet. The best we've got is limited scope plus really fast silicon.

Space Gopher
Jul 31, 2006


CaptainSarcastic posted:

I guess I just don't see whatever Apple is doing as being particularly relevant to GPUs. They have been primarily a phone company for years now, and their market share among desktops and laptops is less than 10% of the market. They're a mobile company that still has a niche position in actual computers. Given where they actually make their money and have actual market share it just seems like the news about them going to their own ARM processors for Macs has been vastly overblown, and I have a hard time not expecting another massive mistake like they've made in the past in their decisions about their computers.

Apple has basically committed to going from a mobile part to a discrete-class GPU capable of performance better than the current AMD Mac/Macbook Pro options in the space of a couple of years. If they don't, then they're going to have a big stain on their new architecture initiative. Worst case, they'd have to stick with x86 on their highest-performance systems, which would be a real bad look as they sell ARM everywhere else.

They've got a ton of very smart people, near-infinite cash, and they've successfully pulled off more than one big architecture changeover before. It didn't take them very long to go from starting to build their own silicon to top-of-the-line in the mobile space. On the other hand, building a high-performance discrete GPU architecture is something Intel's tried and failed to do more than once - and those failures didn't happen for lack of resources or smart people.

No matter what happens, it's going to be an interesting show.

Space Gopher fucked around with this message at 07:10 on Jul 24, 2020

Space Gopher
Jul 31, 2006


FuturePastNow posted:

Seems to me the worst case scenario if they fail to deliver high graphics performance is that they keep using discrete GPUs in their "Pro" systems. At least for a while longer. I know they did not include the AMD driver in the ARM version of macOS, but that's comparatively easy to change. Using a graphics card isn't an x86 thing.

You're right that the more realistic, non-worst-case failure scenario is falling back to discrete AMD hardware and shipping it in Apple CPU-equipped systems.

That would still be a clusterfuck, though. Well-optimized drivers for a new architecture aren't going to pop up out of nowhere. If they have a team on it right now, they're not putting their best and brightest on Project "either this is throwaway work, or it proves our senior execs wrong." Having to shift gears to support discrete non-Apple GPUs would be a mess that would probably mean delays to moving the pro hardware onto ARM, and all kinds of performance issues thanks to a last-minute changeover.

And, it's Apple. They've got a good track record overall, but they've made some really boneheaded and stubborn technology choices before.

So, in the end, they'll either be a new player in the high-end GPU market capable of performance on par with AMD and Nvidia (thanks to PowerVR coming back from the wilderness, even!), or the whole project will melt down in an interesting way. No matter what happens, it'll be fun to watch.

Space Gopher
Jul 31, 2006


SwissArmyDruid posted:

Ladies and gentlegoons of the jury, I rest my case.

Bring on the loving gigantic OLED monitors. What's worse is that even Vizio, a brand that basically made its name on dumb TVs that they couldn't call TVs because they didn't have tuners in them and had to call them "displays" instead, doesn't have a single dumb model.

It's effectively cheaper to make a smart TV than a dumb TV: the parts to run the "smart" system barely cost anything more than their "dumb" equivalents, and the manufacturer can sell ad space on the smart version to more than make up the difference.

It'd be nice if you could pay a bit of extra money for a no-ads version, but that's not what the market wants, I guess.

Space Gopher
Jul 31, 2006


Cactus posted:

My 970 ran real hot playing Grounded the other night, to the point my room heated up to an uncomfortable degree even though I had all windows open. I'm assuming it's because that is a really old card and modern games even on default modest settings are now pushing it to its limit. 3080 can't come out soon enough.

GPUs typically put out close-to-full heat in games and 3D applications even at low settings, because when each frame is easier to make, they will make more frames (and give you a higher framerate) rather than drop their power consumption. They're also typically not as good as CPUs at quick power state transitions, low idle consumption, and race-to-sleep - that's why laptops usually run switchable graphics.

There's some variance in power consumption based on exactly what the GPU is doing, but you can expect pretty high power consumption from a high-end desktop GPU in any 3D application, whether you're getting a zillion FPS in Quake 3 or struggling to run Control maxed out.

Based on past trends, the 3080 will probably be worse as a room heater than your 970 even if it's got a great cooler and the GPU runs at a lower die temperature. GPUs do zero mechanical work and follow the laws of thermodynamics; every picowatt that comes in as electricity has to leave as heat.

Space Gopher
Jul 31, 2006


LRADIKAL posted:

It has more to do with software availability and optimization and integration of said software with the available hardware. i.e. if all the best producers with the most money buy macs and use a particular piece of mac software, and the best hardware add-ons are mac compatible, then you end up with the best production tool chain in spite of potentially "worse" hardware.

This is spot on.

The "Macs are better for graphics" era definitely existed, but it was in the late 80s to the mid 90s, and it was never really a gaming or performance thing. PC graphics hardware was a mess of incompatible standards with weird performance and feature gaps, and the limited Apple hardware set made for a comparatively easy and stable target. Photoshop and PageMaker (now InDesign) started as Mac-only products, and even after the Windows ports came out, Mac users were first-class citizens lording it over the Windows folks for quite a while. Mac OS's font handling was also way better than anything you'd get on DOS or Windows for a long, long time - not a big deal for most users, but essential for anybody trying to lay a page out to exact picas and points. If you were serious about any kind of print work, a Mac was absolutely necessary.

By the time consumer 3D cards started to become commonplace in the late 1990s and early 2000s, though, Apple's hardware was nothing special, and the pro graphics situation on Windows made it to close-enough feature parity. Windows' own font handling stayed garbage for a long time, but any applications for people who cared included their own rendering engine and solved the problems themselves. At that point Apple was mostly coasting on the momentum of designers and others who'd learned to work on their software and didn't want to change.

Space Gopher
Jul 31, 2006


VelociBacon posted:

2k to buy a PC with a monitor and peripherals is like a 60hz i5 setup no?

No.

2k will get you a Ryzen 3700X, a midrange X570 or B550 motherboard, 32 gigs of RAM, a 1TB SSD, an FE 2070 Super, an LG 27GL850 (144Hz IPS G-sync 2560x1440), and enough room left in the budget for a decent case, good power supply, acceptable keyboard and mouse, and a grey-market copy of Windows.

Right now, the upcoming AMD and Nvidia launches make it even smarter than usual to hold off buying unless something's forcing your hand, but $2k can buy you a whole lot of computer - and $12k is either a healthy chunk of emergency fund if the economy keeps going to poo poo, or a loving incredible vacation if it comes back.

Space Gopher
Jul 31, 2006


MonkeyLibFront posted:

I'm not going to financially recover from the 3080ti 😬

Don't wreck your savings or go into debt for computer parts.

Turn a few settings down if you have to; you'll still have just as much fun as everybody else playing the game.

Space Gopher
Jul 31, 2006


shrike82 posted:

https://twitter.com/GarnetSunset/status/1296916731378704391?s=20

those are some big jumps at the entry/mid-level tiers

so, assuming that this guy is legit and not just throwing random three and four digit numbers next to each other -

if the 3090 is $1400, then what is that $2000 absolute unit of a card supposed to be?

Space Gopher
Jul 31, 2006


DrDork posted:

No one's "cold" on the 30-series right now. Some people aren't happy with the pricing that's been rumored, but all that'll end up meaning in most cases is that they'll buy one step down to what they've been accustomed to. They'll still buy something. It's not like the 20-series launch where a lot of people just didn't see a reason to upgrade to anything.

:shrug: Speak for yourself, I guess.

The most plausible scenario right now seems to be:

- The absolute top-end "price is no object" tier gets a substantial boost in both price and performance. Important if you're chasing the 4k high refresh rate dragon and have money to burn in 2020, but that's not a lot of people

- Price/performance through the rest of the range improves a bit but not dramatically. Someone who's unhappy with their current performance and thinking about buying might go for it; whatever the 3060 ends up looking like, it'll be a big boost for the person who's been holding on to a 970. But, if their general attitude towards their current system is "eh, my current card is mostly OK, and the 2000-series doesn't give me enough of a boost that I want to spend the money," then the 3000-series isn't likely to make a much stronger case

- Features like raytracing and DLSS move down the stack into lower-priced products. With some of the DLSS 2.0 results at lower resolutions, this could be a long-term game changer - but right now DLSS 2.0 is only supported by a tiny handful of titles. It's not a super compelling reason to upgrade yet

Meanwhile, the one big-name/big-graphics release that might drive upgrades is Cyberpunk 2077, which isn't going to hit its release date for months, and has already seen several delays. It's also targeting XB1/PS4, so odds are it'll scale well onto lower-end hardware.

People will continue to upgrade, but presenting it as "everybody's going to buy something" is really overestimating how much people will be driven to upgrade what they've got.

Space Gopher
Jul 31, 2006


Taima posted:

It's actually not a substantial price increase for the top end. The 2080Ti was $1200 minimum, and we're being quoted $1299 minimum with FE bump to $1399. That's immaterial for that specific market, and the conditions for that market are actually better than ever for multiple reasons. It's possible that prices might settle higher, but we will have to see. In any case the point remains that the top market doesn't care.

Re: price/perf through the rest of the range, there's basically no reliable data on price/performance, especially for anything below a 3080, so I would definitely hold off on that assessment. Please tell me you're not relying on that "comprehensive benchmark" image where all the cards are supposedly compared because that is insanely fake.

And re: DLSS2 I respectfully disagree. I get that people were burned on DLSS1.0 but only intensive titles need DLSS, not every title. Nvidia is going to be reaching out to everybody to get DLSS in their game, and they have way more impetus to do that than they ever have. Who is making a GPU intensive game in 2020+ and doesn't want an absolute shitload of free performance? I get your reluctance but DLSS is the most magical single technology in... honestly I don't even know what the last comparable tech is. This thing is fully happening.

Sure you have titles like Horizon where they seemingly rejected DLSS on purpose because AMD was sponsoring it, but saying that it bit them in the rear end is an understatement. That game would have been received completely differently if it had the headroom of DLSS. If anything it's a major warning to publishers to include that poo poo, especially when Death Stranding released at the same time and ran on a potato as long as it had DLSS.

Yes, the absolute top end customers will buy whatever comes out. If Jensen comes out with a 2080Ti, scribbles "3090" on it in Sharpie, and announces a $2000 price tag, they'll buy it to replace their 2080Tis. They're kind of irrelevant to the discussion of whether the rest of us are going to be convinced to upgrade. Also, "we're being quoted $1299 minimum" doesn't mean much, given what we already know about volatility in pricing strategies leading up to launch.

What we know for sure (barring some very elaborate fakery) is that there's some kind of monster-sized card with a couple of crazy custom PCBs, an expensive and elaborate cooler, and even a new high-current power connector, that's probably being put into the "gaming" product line rather than the pro/datacenter-targeted parts. We don't know much about the specifics of the product lineup, but it's most plausible that it's targeted above the current 2080Ti tier as an extreme halo part. So - super high performance, super high pricetag, probably irrelevant outside of that very narrow niche.

For the normal-people range, I'm not basing anything on whatever probably-faked "comprehensive benchmark" you're talking about, just market conditions. Nvidia has very little competition and no reason to offer massive price cuts or performance boosts at the same pricing tier. We haven't seen any indication that they'll do otherwise. It's possible they'll surprise us, but that's why I said "most plausible scenario" instead of "absolutely guaranteed to happen."

Finally, for DLSS, I actually agree with you. It's very impressive technology that is likely to be relevant in the future, and prolong the lifespan of current cards that support it (although we'll probably see all kinds of caveats as adoption increases). Right now, though, it's only marginally relevant. The current marquee support is basically just Control (OK, high requirements, although it does scale down nicely) and Death Stranding (yes, it's a DLSS showcase, but the dirty secret is that it'll run well and look good on a non-DLSS potato, too). So - once everything starts coming out with DLSS support, then it'll be a good time to upgrade for DLSS.

In the mean time, it's not a reason to rush out on launch day. I'm still not seeing any good reason to do that, unless you've already figured out that you're going to be upgrading because what you've got now isn't enough.

(super hot take: there's a good chance that, in retrospect, we're going to see the 1000-high-end-2000 jump as the big one, because it introduced new long-term features like DLSS, and the 2000-3000 jump as comparatively minor)

Some Goon posted:

so unless you're a real loathsome type that can't stand the idea that a console can rival your computer your GPU needs are still going to be dictated by your monitor.

I mean, we are talking about a hobby where a good chunk of people seem to think that "we are the master race" jokes are the height of comedy, sooooo....

Space Gopher
Jul 31, 2006


DrDork posted:

Let's just all agree we're bored and spitballing various hypothetical situations, guys, and that we all want new cards because if we didn't we wouldn't be poasting in this dumb thread.

I literally don't want a new card. I'm not spending new-GPU money in 2020 when what I've got works just fine.

I'm just here to :munch: at the drama, when we've got all of this happening, basically at the same time:

- Nvidia launches Ampere with some ludicrous high-end parts that probably won't even fit in most cases, and probably prices to match
- AMD desperately tries to get back on track with Big Navi
- Intel takes yet another swing at discrete graphics, possibly with a sick shroud design. Maybe the card will make it to market this time!
- Apple transitions to their own silicon on the desktop and probably tries to launch their own high-performance GPU

MikeC posted:

The reason why the hype train is reaching the ridiculous levels it is at now is thanks to Cyberpunk featuring RT and DLSS. For a lot of gamers (myself included), this is a must-have day 1 title and I sure as hell want my 60-90 FPS with RT on when I play that. Big Navi might arrive in time to compete but hell, I am going back to Team Green if that's what it takes.

Funny, I thought everyone who was buying hardware for day one sick graphics in Cyberpunk would have already picked up a 2080Ti for that April release. Maybe even one of those co-branded promotional cards.

Preordering games is silly, preordering singleplayer games is especially silly, and dropping a bunch of bills on hardware for your preorder is just ludicrous.

Space Gopher
Jul 31, 2006


mobby_6kl posted:

Yeah the car is sitting at 0km/h. DLSS is clearly blurrier in these stills, just open the two images in two tabs and switch between them. I'm sure it wouldn't be noticeable when actually driving though, unless the DLSS shits itself in motion somehow and produces worse results.

Given how DLSS works - basically, integrating detail across several frames, but in a smarter way than TAA - it probably won't look as good in motion as it does at rest or compared to native rendering, but it will probably still look better than TAA.

Go back and check out those super-low-res Control videos to get a more obvious sense of how this works. If the camera stays on a given low-motion subject for even a few frames, DLSS is using every frame it can see the subject to shift the rendering and find more detail. It can get fine texture detail quickly even from a handful of low-res source images, because it can basically say "I'm missing a bit of information, please shift the image a quarter pixel to the left and down a touch." That's how it can do "better than native" results. But, when the camera swings around quickly, or there's something like a fire, explosion, or crazy sci-fi effect happening on screen with a lot of rapidly-changing detail, DLSS just doesn't have enough information to do much more than a good single-frame upscale.

The good news is that, when there's a lot of rapid motion happening, your eyes and brain also aren't great at noticing and picking out fine detail. So, the overall effect stays pretty good as long as you're not trying to upscale a sub-DVD-resolution source image to 4K.
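Here's a toy version of the jitter-and-accumulate part in Python - nothing like the actual network, just the "shift a fraction of a pixel and integrate" idea:

code:
# Toy temporal accumulation: render at half resolution with a different
# sub-pixel jitter each frame, then scatter the samples into a full-res grid.
# A static subject fills in detail over a few frames; anything that moves
# between frames can't be accumulated this way and falls back to upscaling.
import numpy as np

def ground_truth(h, w):
    y, x = np.mgrid[0:h, 0:w]
    return ((x + y) % 2).astype(float)      # fine checkerboard detail

def render_low_res(scene, jitter_y, jitter_x):
    return scene[jitter_y::2, jitter_x::2]  # half-res render, offset by the jitter

scene = ground_truth(8, 8)
accumulated = np.zeros_like(scene)
jitters = [(0, 0), (0, 1), (1, 0), (1, 1)]  # each frame samples a different spot in every 2x2 block
for jy, jx in jitters:                      # four "frames" of a static subject
    low = render_low_res(scene, jy, jx)
    accumulated[jy::2, jx::2] = low         # scatter samples back to full res

print(np.array_equal(accumulated, scene))   # True: full detail recovered at rest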

Space Gopher
Jul 31, 2006


VelociBacon posted:

PC GPU marketing is bad because the market demographic are idiots. They wouldn't market like this otherwise.

Spamming consumers with unnecessarily complex product lines is a great way to lock people who try to do their research into your brand.

Say I want a video card. I have a rough idea that it should be a 30XX, but there are a ton of options - if I go with Asus, do I want the TUF, Dual, ROG STRIX, Turbo EVO, Phoenix... ? I'd better bust out the comparison charts and figure it out, which means, whatever I end up buying, I'm going to go home with an Asus card.

Take a look at Dyson's vacuum cleaner lineup for a masterpiece of this kind of marketing, in an industry that's only tangentially related to GPUs now that most video cards don't have blowers any more. By getting you to think about a purchase in terms of "what product from this brand do I want," they lock their competition out of your decision.

Space Gopher
Jul 31, 2006


Mikojan posted:

Wait aren’t these FE coolers usually trash?

Would it not be wise to at least wait for acoustic benches?

That was the old conventional wisdom, because the FE cards were blowers - and they were blowers because Nvidia was still pushing SLI setups. The one place where normal open-air multi-fan coolers don't work well is when cards are jammed up against each other, so Nvidia had to ship coolers that weren't good for most people to avoid "Nvidia's premium cards don't work in Nvidia's top-of-the-line setup" media attention.

The FE coolers have been fine for 20xx and up.

Space Gopher
Jul 31, 2006


Dr. Fishopolis posted:

I don't think anyone in history has ever been owned as hard as 2080ti buyers.

https://en.m.wikipedia.org/wiki/GeForce_FX_series

Space Gopher
Jul 31, 2006


sean10mm posted:

What's the point of having a big hoopla launch event if you won't have poo poo to sell?

It's one thing to have problems with 3090 yields limiting it to a trickle at first, but if they can't even manufacture a 3070 it's really a joke.

If AMD is going to release cards that at least compete at the 20xx level, and they’re working with suppliers who can actually supply, then this means Nvidia can get out in front of them with a product that people will wait for.


Space Gopher
Jul 31, 2006


exquisite tea posted:

Games have two settings: Ultra or Trash Bin.

As a member of the PC gaming master race what differentiates me from the console peasants is my well-honed skill at tweaking, benchmarking, customizing, and optimizing my system for maximum performance.

opens settings, slams every slider and option to “ultra extreme.” the fps counter occasionally dips below 60

Stupid unoptimized piece of poo poo! Guess I have no choice but to buy a new video card now.
