|
Darkpriest667 posted:A G pentium on the haswell isn't made for gaming it's made for grandma who wants to surf her facebook and knitting sites. At the end of the day, isn't a Haswell a Haswell? The G3258 was made specifically for enthusiasts and is definitely a value when you can get one with a Z97 mobo for $99 total. I don't know why we're making GBS threads all over it, a G3258 + Z97 mobo +R9-290 is basically the best way to spend ~$300.
|
# ? Apr 21, 2015 01:45 |
|
|
Darkpriest667 posted:I doubt your IPS clock is 4.6 Ghz I'd like to see a CPU-Z Verification link. That being said, even if it is a Haswell dual core pentium that clocks to 4.6 GHz (which means you're using some hefty aftermarket cooling) you need a dual core with HT or a quad core at least to do good texture loading on modern video games. While the main process is still on thread 0 most of the secondary processes are pushed to threads 1-3 and on AMD octo or Intel HT quads to threads 3-7. I'd highly recommend putting some cash into a 4670k or if you seriously do give a poo poo about overclocking a 4790k (Devils Canyon which clocks like a MFer compared to Ivy and other Haswells) Is 4.6 really high? My i7 2600k is running at 4.6 on air cooling, granted with a pretty decent heat sink and it only hits 72C when stress testing too.
|
|
# ? Apr 21, 2015 01:52 |
|
Darkpriest667 posted:Not on stock cooling, which is what I said, if he is running a 4.6 on custom loop or water cooling why the hell is he running a 50 dollar processor. It makes no damned sense. All of his other components are mid to high end. But his processor is low end because he wants to clock it high and then he bitches about stuttering? That's stupidity. Buy a loving processor that isn't poo poo for actually doing things besides clocking high. A G pentium on the haswell isn't made for gaming it's made for grandma who wants to surf her facebook and knitting sites and a few overclockers who are looking to post scores on hwbot. You don't need a water loop to hit 4.6 and above on a G3258. Most people use the cheap Hyper 212 Plus.
|
# ? Apr 21, 2015 01:53 |
|
Are you seriously trying to play GTAV on a dual core? GTAV pegs all four cores of my 3570K @ 4 GHz at 100% the entire time I'm playing.
|
# ? Apr 21, 2015 01:54 |
|
Twerk from Home posted:At the end of the day, isn't a Haswell a Haswell? The G3258 was made specifically for enthusiasts and is definitely a value when you can get one with a Z97 mobo for $99 total. I don't know why we're making GBS threads all over it, a G3258 + Z97 mobo +R9-290 is basically the best way to spend ~$300. It's a great deal but it's also a gamble, some games simply won't run on it since they were designed to only run on processors that can handle 4 threads.
|
# ? Apr 21, 2015 01:54 |
|
Darkpriest667 posted:I know reading is hard but your CPU would be the bottleneck and a 970 might be able to bottleneck yours. I said it wasnt the storage because it's an SSD not an HDD. Water cooling isn't expensive at all if you bargain shop. I got a Nepton 140XL AIO for $40 AR over the holidays. A G3258 can be had for as cheap as $40. With those numbers in mind, there are very few CPU+cooler options in the $80 range that overclock as well as a liquid-cooled G3258. Core for core it essentially keeps up with a stock-clocked 4770K (which obviously does have more physical/logical cores) in FPS and has very low stutter (99th percentile frame times / cumulative frametime above 16ms). Yeah, you can save $15 or so by air cooling but you buy the cooler and then you have it. In games that aren't highly threaded it's a pretty great bang for the buck. In games that can fully load up all your cores, exploit hyperthreading, etc, it falls short. But it's not really an irrational choice if you're willing to tinker, and if you upgrade down the line you can drop your new CPU in and you have a good cooler and stuff. With the Z97 combo Microcenter was running it used to be "buy the mobo, get a free CPU" and it was fantastic in the context of being a starter CPU to get your foot in the door with LGA1150. Paul MaudDib fucked around with this message at 02:43 on Apr 21, 2015 |
# ? Apr 21, 2015 02:17 |
|
BadAstronaut posted:So I am guessing from everything on this page that the GeForce GTX 970 really is the only choice at the moment - this still hold true if I am upgrading to a 2560 x 1440 monitor? (I currently have 1920x1200 powered by a GTX 760Ti). If you can find an open box 970 that's probably the best bang-for-buck right now. Lots of them floating around right now because of the 3.5GB blowback. In the US they're down around $275 all the time on Newegg. There's actually enough that there's a decent choice of models/brands. You don't get game codes and warranty is up to the manufacturer to decide, though. In an embarrassing display of self control I blew $430 of my tax return on an open box Asus Strix 980 last week. I bought a 4K monitor two months ago and I've been holding out for a 390X but I just don't think they'll be here before late 3Q/early 4Q and I'm tired of waiting. 1440P and 4K are the places where the AMD cards do shine though. You can pick up open box 290Xs for $275 now, which is a pretty great deal and will do well in those scenarios. I'm still thinking real hard about picking up a 295x2 for a mITX build for 4K. I like the idea of SLI 980s but without the liquid cooler a mITX case would probably heat up pretty bad. Custom water loops make the cost differential much worse. 980 Ti or Titan X are nice halo options but I can't possibly justify spending more than about $800 on a pair of cards. For now I'm watching 295x2 prices and seeing what the 390X announcement will look like in 6 weeks. Paul MaudDib fucked around with this message at 02:41 on Apr 21, 2015 |
# ? Apr 21, 2015 02:24 |
|
Darkpriest667 posted:Odds are your stuttering issue is either your Storage (HDD definitely not an SSD unless its a lovely one) or your CPU is bottlenecked by the card (which is what I think it is) Darkpriest667 posted:I know reading is hard but your CPU would be the bottleneck and a 970 might be able to bottleneck yours. I said it wasnt the storage because it's an SSD not an HDD. I'm trying to parse these comments in context and it's just not adding up. CPU is the bottleneck, but it's bottlenecked by the card? Wouldn't that make the card the bottleneck and not the CPU? The HDD is definitely not a SSD, except it is a SSD not HDD?
|
# ? Apr 21, 2015 04:31 |
isndl posted:I'm trying to parse these comments in context and it's just not adding up. CPU is the bottleneck, but it's bottlenecked by the card? Wouldn't that make the card the bottleneck and not the CPU? The HDD is definitely not a SSD, except it is a SSD not HDD? They are saying that the GPU gives the CPU too much to process, causing it to back up. Of course, last I looked the CPU sends data to the GPU, which then processes it to create frames that are then output to the monitor, but I don't know if that is entirely correct. In any case the HDD/SSD thing was just awkwardly phrased: they are saying that a disk-based HDD could cause stutter but that an SSD would not. I tend to agree that a two-core CPU could cause issues if the software is designed with more than two threads in mind, a situation that is very common these days.
|
|
# ? Apr 21, 2015 04:49 |
|
Twerk from Home posted:At the end of the day, isn't a Haswell a Haswell? The G3258 was made specifically for enthusiasts and is definitely a value when you can get one with a Z97 mobo for $99 total. I don't know why we're making GBS threads all over it, a G3258 + Z97 mobo +R9-290 is basically the best way to spend ~$300. It was a great value deal, but since then, some games have launched that simply do not launch without a four-thread CPU.
|
# ? Apr 21, 2015 06:27 |
|
HalloKitty posted:It was a great value deal, but since then, some games have launched that simply do not launch without a 4 thread cpu.. like what? is this because consoles are multi-core now?
|
# ? Apr 21, 2015 06:54 |
|
Fauxtool posted:like what? is this because consoles are multi-core now? More like AAA titles are targeted at consoles first, then ported to PC, instead of the other way around.
|
# ? Apr 21, 2015 07:01 |
|
SwissArmyDruid posted:More like AAA titles are targeted at consoles first, then ported to PC, instead of the other way around. I understand how consoles can run nice looking games while having lower end specs due to having consistent hardware that makes it easier to squeeze every bit of power out. For PCs is it easier to just overuse the CPU on a port than to optimize it to run both on the GPU and CPU somewhat equally? Do the recommended specs on the games reflect the higher CPU usage at least? edit: please explain like im stupid Fauxtool fucked around with this message at 07:08 on Apr 21, 2015 |
# ? Apr 21, 2015 07:04 |
|
I might be alone on this but I feel like I JUST realized that there are sort of "advantages" to owning a single-slot card. Say you were unfortunate enough to get an HP Pavilion 500, but you want to make some sweet headshots, make montage parodies, and get called out on making them by high school contrarians. Now, of course the HP has a PCI-E x16 slot, so things should seem peachy. However, HP hosed up and it looks like that additional cooler on a dual-slot card is gonna be blocked off because the PCI-E x16 slot is at the bottom! Uh oh! Of course, it seems like something like this would do well with that. HP's case design almost always forces you to use a single-slot card on their Pavilion and some of their Envy models. Would the only real disadvantage of a single-slot card be weaker cooling?
|
# ? Apr 21, 2015 07:22 |
|
Fauxtool posted:I understand how consoles can run nice looking games while having lower end specs due to having consistent hardware that makes it easier to squeeze every bit of power out. Well, there are different reasons for different games, it seems. For example, Far Cry 4 was coded such that it very specifically wants to run its main thread on the third CPU. If you've only got two, no dice. edit: There's really no good reason for a game to not work at all because there are only two processors available. Any multithreaded program should work on a system with any number of processors, even one, although it might slow down a lot. I feel like ever since early 2014 or so there's been a trend of AAA PC ports demanding a lot of hardware, more than seems right. It's possible that modern AAA games are genuinely doing a lot more work under the hood. But the evidence seems to suggest that a lot of games are just poorly coded to begin with, or get that way when they're ported. Yaoi Gagarin fucked around with this message at 07:43 on Apr 21, 2015 |
# ? Apr 21, 2015 07:29 |
|
Fauxtool posted:I understand how consoles can run nice looking games while having lower end specs due to having consistent hardware that makes it easier to squeeze every bit of power out. It's brute force. AlphaXires posted:I might be alone on this but I feel like I JUST realized that there are sort of "advantages" of owning a single slot card. Say you were unfortunate enough to get an HP Pavilion 500, but you want to make some sweet headshots, make montage parodies, and get called out on making them by high school contrarians. Now, of course the HP has a PCI-E x16 slot, so things should seem peachy. It shouldn't be a problem. When the board is oriented in the normal fashion, a graphics card hangs down with the cooler underneath. With an inverted board, your cooler will now be on top. Temps might be higher, but I don't think it will be impossible to use your standard dual-slot cooler.
|
# ? Apr 21, 2015 08:19 |
|
I haven't been able to play GTA V reliably on PC so far because of a weird error I'm getting: it crashes the game and comes up with a message titled "ERR_GFX_D3D_INIT" that says something about failed initialisation. I've googled it and it seems like there's a Steam thread and a Rockstar support thread about it, but the only suggestions are to downclock your card - which I tried with no luck - and reinstall the game, also with no luck. It's fairly frustrating; since it has GFX in the error I'm assuming it's graphics-related, but I have no idea what it would be. It's happening on my i5 3570k with an Asus 980. Anyone have any suggestions at all?
|
# ? Apr 21, 2015 10:09 |
|
Man these giant Korean 4K monitors seem to be the new big thing. This 39" one claims to do 144 Hz at lower resolutions in game mode. There is even a crazy 48" one for $999.
|
# ? Apr 21, 2015 11:01 |
|
Unfortunately, 4k seems to be 30hz only :<
|
# ? Apr 21, 2015 11:34 |
|
Fauxtool posted:like what? is this because consoles are multi-core now? Far Cry 4 (although someone made a hack for this I believe) Call of Duty: Advanced Warfare (although there's probably a hack for that too) Maybe some more, I'm not sure.. I have a feeling this trend won't just disappear, seeing as companies like Ubisoft and EA disrespect PC gamers (and their customers in general) at every turn
|
# ? Apr 21, 2015 13:09 |
|
Isn't this the exact thing people want? Why shouldn't games take advantage of multicore CPUs? And if compatibility is an issue, how many people even use something like a single- or dual-core CPU?
|
# ? Apr 21, 2015 14:22 |
|
HalloKitty posted:Far Cry 4 (although someone made a hack for this I believe) Dragon Age: Inquisition
|
# ? Apr 21, 2015 14:27 |
|
Sistergodiva posted:Isn't this the exact thing people want? Why shouldn't games take advantage of multicore cpus? Principle. Especially because you can poll for features and capabilities. It's one thing to say "yeah we tried to stack our workload properly but you just don't have the headroom, sorry dude". It's another to say "we're specifically going to ignore those n perfectly functional processor cores that are big enough to handle our workload for reasons that have absolutely nothing to do with these bulging burlap sacks with silent-movie dollar signs on them or our barely-veiled spite for people who don't buy the most riced out tacticlol computer hardware available". Or hell, maybe they were just lazy, we don't know. Point is, the people who do those kinds of things don't deserve to be in their line of work. Which is a really sad statement, given the state of video game development. dont be mean to me fucked around with this message at 14:44 on Apr 21, 2015 |
# ? Apr 21, 2015 14:41 |
|
Sistergodiva posted:Isn't this the exact thing people want? Why shouldn't games take advantage of multicore cpus? I'd say that 80% of laptop users are on dual core CPUs. Unless it has "MQ" in the model number, it's a dual core.
|
# ? Apr 21, 2015 14:51 |
|
Whoops, didn't mean to start a two-page fight. My G3258 is on a 140mm refurbished CLC that I picked up from NewEgg for like $25. At 1.35v it actually runs mostly stable at 4.8, but I had an unknown crash every few days so I toned it down. I actually have a 4690k in a box next to me that I bought at Microcenter just to get the bundled $40 off on a motherboard, and I need to sell that CPU on SA Mart because I'm fine playing mostly Dark Souls on the G3258 and holding out for Broadwell-K. Anyway, didn't mean to derail everyone, I was just wondering if my GPU might have had something to do with GTAV performance since people were chatting about it.
|
# ? Apr 21, 2015 14:53 |
|
Don't waste money on Broadwell if you already have an unlocked Haswell; if I were you I'd just use the 4690K or hold out for Skylake.
|
# ? Apr 21, 2015 15:02 |
|
Lol SH/SC quickly forgets the marvel of the G3258. I've built systems with that chip and 290Xs. It's not a dumb idea, since in almost every scenario it's exactly the same performance as a chip and motherboard costing 8 times as much. That being said, if a game does slam all cores on a four-core i5 (exceedingly rare so far, but may be more common with DX12), it is likely the bottleneck.
|
# ? Apr 21, 2015 15:17 |
|
Twerk from Home posted:I'd say that 80% of laptop users are on dual core CPUs. Unless it has "MQ" in the model number, it's a dual core. Yeah, but it doesn't matter to games as long as it is 4 thread capable, not 4 core. THE DOG HOUSE posted:That being said, if a game does slam all cores on a 4 core i5 (exceedingly rare so far, but may be more common at DX12), it is likely the bottleneck. That's the second time I've read this with regards to DX12, which seems odd, since the exact opposite should be true - DX12 burdens the CPU less. We didn't forget the marvel of the cheap and overclockable Pentium, but it's basically bad advice not to disclose the fact that some popular games simply will not run on it.
|
# ? Apr 21, 2015 15:53 |
|
I guess the PC gaming community is just very fractured, but it seems like everyone wants new tech to be used but with 100% backwards compatibility. Like how people freaked out when the first games required a 64-bit OS.
|
# ? Apr 21, 2015 15:56 |
|
It's like the plight of the Itanium. Cool new tech, but no one wanted to drop x86 and move to a new platform so it died and we had to wait until AMD64 appeared before we got a usable 64-bit platform. Except you're right. For some reason people freaked out when developers wanted to fully utilize the new platform because it was a step away from the platform everyone was used to dealing with. I don't get why backporting new software to an old platform is something people want. Being able to run older x86 software on the latest x64 is great, but even the latest consoles run on x64 so what the gently caress is the point of developing games for 32-bit systems anymore?
|
# ? Apr 21, 2015 16:12 |
|
HalloKitty posted:That's the second time I've read this with regards to DX12, which seems odd, since the exact opposite should be true - DX12 burdens the CPU less. Perhaps it was lower overall CPU overhead per draw call/driver stuff but better multithreading capabilities? With lower driver overhead it doesn't seem like overall CPU use will change, because either you're spending CPU time on driver overhead + game calculations for AI and junk, or you have less driver stuff to worry about so you can run at higher FPS and the CPU has to work harder to keep the GPU fed at that higher frame rate anyhow.
|
# ? Apr 21, 2015 16:13 |
|
Kazinsal posted:It's like the plight of the Itanium. Cool new tech, but no one wanted to drop x86 and move to a new platform so it died and we had to wait until AMD64 appeared before we got a usable 64-bit platform. Yeah, it just feels so weird, how PC MASTER-RACE is all about how much better the games are than consoles and how consoles are dumbing down everything. And now when consoles are the ones actually pushing the big "evil" devs into stuff like 64-bit, multithreading and such, that's apparently something bad. I mean, how many people who are interested in AAA PC games do not have 4 threads and a 64-bit OS?
|
# ? Apr 21, 2015 16:39 |
|
HalloKitty posted:Yeah, but it doesn't matter to games as long as it is 4 thread capable, not 4 core. I thought DX12 would burden the CPU more across all cores, with the vague concept being almost all of the CPU threads are nearly wasted as it is. I'm probably wrong though then i dont actually know
|
# ? Apr 21, 2015 16:41 |
|
THE DOG HOUSE posted:I thought DX12 would burden the CPU more across all cores, with the vague concept being almost all of the CPU threads are nearly wasted as it is. I'm probably wrong though then i dont actually know I thought the issue was that CPUs still effectively had a bottleneck when interacting with the GPU, in that only one CPU would be talking to the GPU at any time. By allowing the CPU to use all it's cores when communicating with the GPU, much more can be done in parallel. In theory, any single CPU core is much less burdened, but it opens up a lot of headroom to just crank everything up on as many cores as a CPU has, so I guess the answer is more it'll depend more on the skill of the coders. DX12 might in theory improve performance on older multicore machines, yea, nay?
|
# ? Apr 21, 2015 16:50 |
|
Sistergodiva posted:Yeah, it just feels so weird, how PC MASTER-RACE is all about how much better the games are than consoles and how consoles are dumbing down everything. Well, the only exception is the Pentium G3258, basically. Intel dangled a carrot that was delicious to chase after. Really an unlocked i3 is what's needed to keep that value train going. THE DOG HOUSE posted:I thought DX12 would burden the CPU more across all cores, with the vague concept being almost all of the CPU threads are nearly wasted as it is. I'm probably wrong though then i dont actually know Well, that's not actually wrong, but it's more than just multithreading, there's simply going to be less overhead involved in making draw calls, so it's going to be more efficient overall. So the CPU (if it's not given more work such as AI or physics) in any given game at the same frame rate will be doing less overall work with DX12 or Vulkan than with the older APIs, or at least that's the way I've understood it thus far. vv Well, it might help somewhat with AMD CPUs, but it doesn't hide the fact that they have worse single-thread performance whilst chowing down on a lot more power, but if one already has one, bonus! HalloKitty fucked around with this message at 16:59 on Apr 21, 2015 |
# ? Apr 21, 2015 16:54 |
|
Well, everyone on reddit is proclaiming that AMD CPUs will be made relevant again with DX12 due to the improved multi-threading.
|
# ? Apr 21, 2015 16:55 |
|
The vast majority of what DX12 (And Mantle/Vulkan) do is reduce the cost in CPU time of executing driver draw calls, both in total and in terms of splitting the load up (I mean it does a shitload of other stuff too, but this is the part we're talking about). Basically the graph on this page explains the difference: http://anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm Total time spent dealing with the Direct3D part of the stack is reduced, and the user mode driver time is both reduced and split across all 4 threads, which the current APIs can't do. Ragingsheep posted:Well everyone on reddit is proclaiming that AMD CPUs will be made relevant again with DX12 due to the improved mulit-threading. lol yea loving right haha. I mean, yea, it does help AMD cpus because their IPC is so atrocious compared to intel chips that they will bottleneck the videocards less. So an FPS on an AMD CPU might see percentage wise, larger gains from Mantle/DX12, but there's only so far you can go before you're just GPU limited instead, and an Intel chip is still going to get there first.
|
# ? Apr 21, 2015 16:56 |
|
FaustianQ posted:DX12 might in theory improve performance on older multicore machines, yea, nay? It will in theory. In reality, the developers will just ramp up the draw calls instead and your old multicore machine will still run the game like poo poo. And no older games are ever getting ported to dx12, because that'd take
|
# ? Apr 21, 2015 16:57 |
|
Truga posted:In reality, the developers will just ramp up the draw calls instead and your old multicore machine will still run the game like poo poo. And no older games are ever getting ported to dx12, because that'd take Also this
|
# ? Apr 21, 2015 16:59 |
|
|
|
Yeah I'm very interested in how far they can push it rather than how much it will ease burdens. It can't come soon enough though, I cannot wait.
|
# ? Apr 21, 2015 17:03 |