|
I'd hold off on buying a new i7 or an i7 system with the intention of playing DX12 games right now, because there aren't many games yet that use much of DX12's feature set. We can't justify additional cores and threads until we see how well specific games (i.e. the games you want to play in the future, like Battlefield 1) actually scale with those cores/threads.
|
# ? Jun 28, 2016 09:02 |
|
|
Sidesaddle Cavalry posted:I'd hold off on buying a new i7 or an i7 system with the intention of playing DX12 games right now, because there aren't many games yet that use much of DX12's feature set. We can't justify additional cores and threads until we see how well specific games (i.e. the games you want to play in the future, like Battlefield 1) actually scale with those cores/threads. That's a fair call. I think I was going a bit far with the i7 5820K, and I also realised (from a very cursory glance) that there were actually no X99 ITX motherboards. As I previously said, I have an i5 6500, so it's not exactly like I'm in dire need of an upgrade; I might give it a bit of time and see what's happening when BF1 is released.
|
# ? Jun 28, 2016 12:05 |
Guni posted:That's a fair call. I think I was going a bit far with the i7 5820K, and I also realised (from a very cursory glance) that there were actually no X99 ITX motherboards. As I previously said, I have an i5 6500, so it's not exactly like I'm in dire need of an upgrade; I might give it a bit of time and see what's happening when BF1 is released. http://pcpartpicker.com/product/k8KhP6/asrock-motherboard-x99eitxac
|
|
# ? Jun 28, 2016 12:16 |
|
Ah gently caress, that complicates things. Anyway, I shan't poo poo up the thread with my questions any longer. Cheers lads.
|
# ? Jun 28, 2016 12:38 |
|
Good luck with the cooling solution on that one! Also, the 5820K is all but out of production, so it's a pretty perfect time to get one before you're left with a worse-performing, worse-overclocking 6800K at a $60 price hike!
|
# ? Jun 28, 2016 15:29 |
|
I decided I didn't really want to wait for what will inevitably be slight percentage gains, and built a new system last night around the 6700K. It's funny how long I told myself that my several-years-old AMD chip was fine, because criminy. As an aside, I used Corsair's Bulldog barebones kit because my friend was the product manager for it over there (I bought it from Amazon, though). I think it's a pretty good value; you get a Gigabyte mobo with Intel wifi, an okay CPU water cooler, and a good PSU. The build quality is okay, but the included PSU cables are pretty mediocre and I'll need to replace them at some point.
|
# ? Jun 28, 2016 17:04 |
|
Also, outside of gaming, having extra cores/threads is beneficial for multitasking. You can play games while streaming and doing other things a little more easily when you have a bit more CPU overhead to use. Hell, I have encoded Blu-rays while playing games on my 6-core just fine. Letting CPU cycles go unused is a waste.
|
# ? Jun 28, 2016 17:20 |
EdEddnEddy posted:Also, outside of gaming, having extra cores/threads is beneficial for multitasking. You can play games while streaming and doing other things a little more easily when you have a bit more CPU overhead to use. Yeah, it's something that benchmarks tend not to reflect. Hell, I have encountered systems with enough stuff running in the background that the CPU usage bounces around between 15% and 25% when otherwise idling. It makes me think that those extra threads on an i7 will probably get some use when gaming on a system that has actually been in use for a while and has had time for background stuff to accumulate.
|
|
# ? Jun 28, 2016 17:30 |
|
Also remember that the i7s don't just have Hyper-Threading; you also get 33% more cache (1.5 MB per core on an i5 vs. 2 MB per core on an i7). Probably doesn't matter hugely to most gaming workloads (but it could).
NihilismNow fucked around with this message at 17:45 on Jun 28, 2016 |
# ? Jun 28, 2016 17:39 |
|
Big-socket i7s like the 5820K even get 2.5 MB of L3 cache per core, versus 2 MB on the mainstream parts.
|
# ? Jun 28, 2016 17:47 |
|
Sidesaddle Cavalry posted:I'd hold off on buying a new i7 or an i7 system with the intention of playing DX12 games right now, because there aren't many games yet that use much of DX12's feature set. We can't justify additional cores and threads until we see how well specific games (i.e. the games you want to play in the future, like Battlefield 1) actually scale with those cores/threads. This is how I felt about it, so I just bought a 6600K. Maybe by the time gains (hopefully) start to show, it'll be Kaby Lake time. Point taken about streaming, etc. GRINDCORE MEGGIDO fucked around with this message at 22:29 on Jun 28, 2016 |
# ? Jun 28, 2016 19:26 |
|
wipeout posted:This is how I felt about it, so I just bought a 6600K. Maybe by the time gains (hopefully) start to show, it'll be Kaby Lake time. As mentioned, I've been sitting on the fence for a while. I'm gonna go to Microcenter on Monday and pick up a 6700K and a mobo. The better feature set (USB Type-C), the better memory compatibility, and some complaints I read about the fragility of the pins in the socket have convinced me. The single-threaded performance is a bonus as far as I'm concerned.
|
# ? Jun 29, 2016 01:15 |
|
More cores are better, it means I can leave stuff running while still running my Vive without frame drops. A W3690 is still awesome.
|
# ? Jun 29, 2016 10:18 |
|
wipeout posted:This is how I felt about it; so I just bought a 6600k. Maybe by the time gains (hopefully) start to show, it'll be kaby lake time. LOL. I can't say more than that, sorry.
|
# ? Jun 29, 2016 15:04 |
|
SuperDucky posted:LOL. I can't say more than that, sorry. At least Devil's Canyon didn't take an entire year
|
# ? Jun 29, 2016 15:47 |
|
SuperDucky posted:We had this conversation a few pages ago, but that is an "ES", i.e. an engineering sample, and board compatibility can be wonky. The platform has all kinds of neat stuff like lots of native USB 3.0 ports, ten SATA ports, four full-length PCIe slots, and an M.2 slot.
|
# ? Jul 3, 2016 04:35 |
|
.
sincx fucked around with this message at 05:55 on Mar 23, 2021 |
# ? Jul 3, 2016 11:45 |
|
sincx posted:Looks like I might have to keep using this 2600K until it dies.
|
# ? Jul 3, 2016 13:48 |
|
PBCrunch posted:The "E5-2650v3" I got from eBay seems to work just fine in my Gigabyte X99-UD3P motherboard. The clock speed is down 100 MHz compared to a real 2650v3, and the max turbo frequency is down 200 MHz, but it is still pretty amazing when allowed to perform multi-threaded work. I saw something like 1200% in top when transcoding videos in Handbrake. Even with only two channels of memory, the chip transcodes in less than half the time compared with ye olde X3450 (i5-750 with Hyperthreading). Absolutely, the binning isn't complete until after ES time. Gigabyte and ASRock boards tend to be the best about handling ES chips for whatever reason. Intel boards generally don't like them, because Intel changes the BIOS to be picky after RTM.
|
# ? Jul 4, 2016 00:23 |
|
About L3 Smart Cache and CPU market segmentation: in theory, what percentage of a CPU's L3 cache is a single core allowed to utilize/borrow from the others? Take, for example, a 6950X with 25 MB of L3 cache. Can a single core within that CPU use all of it? In addition, if all but one of the 6950X's cores are disabled, does the CPU still have 25 MB of L3 cache that the remaining core can monopolize? The market-segmentation part is where I wonder whether limiting the total amount of L3 on a chip is about artificially limiting performance, or just a practical limitation of each core not being able to use that much L3.
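Nobody in the thread answers this directly, but you can probe it empirically from a single core with a pointer-chase microbenchmark: average latency per dependent load steps up as the working set spills out of each cache level, and where the last step lands shows roughly how much L3 one core can actually reach. A stdlib-only Python sketch (interpreter overhead blurs the absolute numbers badly; the same idea in C shows much crisper steps):

```python
import random
import time

def chase_latency_ns(size_bytes, hops=200_000):
    """Average ns per dependent load over a working set of ~size_bytes."""
    n = max(size_bytes // 8, 2)   # pretend each list slot is an 8-byte pointer
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for i in range(n):            # build one random cycle touching every slot
        nxt[order[i]] = order[(i + 1) % n]
    idx = 0
    t0 = time.perf_counter()
    for _ in range(hops):         # each load depends on the previous one
        idx = nxt[idx]
    return (time.perf_counter() - t0) / hops * 1e9

if __name__ == "__main__":
    # sweep past typical L1 / L2 / L3 sizes
    for kb in (32, 256, 2048, 8192, 32768):
        print(f"{kb:>6} KB: {chase_latency_ns(kb * 1024):6.1f} ns/hop")
```

If a single core could only reach a fraction of the L3, the latency jump to memory speeds would appear well before the chip's rated L3 size.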
|
# ? Jul 4, 2016 21:20 |
|
Posted this in the overclocked thread, but asking in here cos y'all helpful. Apart from core clocks, what are the advantages of raising cache / uncore clocks on a 6600k? Worth it? I'm running DDR 4000 if that makes any difference. (4.4 @ 1.29v)
|
# ? Jul 4, 2016 22:37 |
|
wipeout posted:Apart from core clocks, what are the advantages of raising cache / uncore clocks on a 6600k? Worth it? Virtually nothing: some extra heat and potential stability issues; an extra GHz might get you 2% more performance.
|
# ? Jul 4, 2016 22:52 |
|
I caved and bought a 5820k. I do enough rendering that it'll be worth it. I got an open-box GA-X99-UD4 for it.

Apparently there is a problem with this particular model (and possibly other Gigabyte X99 boards?) - I got stuck in a boot loop where it would spin up for 5 seconds with no video output, no beeps, etc., then power down for a couple of seconds and restart. Might have been why it was returned. For any other intrepid X99-onauts out there, the fix is to do a BIOS flash: format a USB 2.0 flash drive to FAT32, put the BIOS file on the drive (with no other files) and rename it to GIGABYTE.BIN. Put it in the white USB port, power up, wait for the flashing to stop, then reset. It'll boot and go through a "restoring BIOS" stage, then it should work normally.

I'll be doing the final assembly and cable arranging tonight, but it looks good to go at this point. I have a pair of 2x8 GB Geil Potenza 3000 kits for a total of 32 GB of memory (I am aware that combining kits is a crapshoot), plus my 980 Ti Classified, so this should be pretty sweet.

I was gonna get another 980 Ti to SLI with it; Microcenter had an open-box MSI Golden Edition for $408, and the guy at the back told me it was eligible for their 30%-off clearance sale. It didn't ring up at the register, but they overrode it. Then I asked for the bundle discount on the mobo, and the supervisor went back, conferred with the manager, and decided the GPU wasn't eligible for the clearance after all. Goddamn it, $280 for a 980 Ti would have been a good deal. It's not like that's particularly unfair either, since Newegg now has that model for $400, brand new. Oh well, their loss; it's just sitting there depreciating by the day, just like the 780 Tis they're trying to get almost $400 for. The register jockey literally bitched at me about how far they had marked down the motherboard and how getting the bundle discount on an open-box item was soooo unfair to them.

You see, it was a $300 motherboard back when it was launched 2 years ago, and they already graciously marked it down to half price. Like I'm the one who prices open-box items or something. Worst shopping experience I've had in a long while, and I can't even complain or it'll look like sour grapes. Paul MaudDib fucked around with this message at 02:14 on Jul 6, 2016 |
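Those recovery steps can be scripted. A dry-run sketch on a Linux host, where the device node and BIOS filename are placeholders (the real ones depend on your system and the file Gigabyte's site gives you); DRY_RUN guards the destructive commands and the mkfs/mount calls need root:

```python
import subprocess

DRY_RUN = True                 # flip to False only after double-checking DEVICE
DEVICE = "/dev/sdX1"           # placeholder: the USB 2.0 stick's partition
BIOS_FILE = "X99UD4.F21"       # placeholder: the BIOS file you downloaded

def run(cmd):
    """Print the command; execute it only when DRY_RUN is off."""
    print("+", " ".join(cmd))
    if not DRY_RUN:
        subprocess.check_call(cmd)
    return cmd

def prepare_recovery_stick():
    cmds = [
        ["mkfs.fat", "-F", "32", DEVICE],        # FAT32, as the board expects
        ["mount", DEVICE, "/mnt"],
        ["cp", BIOS_FILE, "/mnt/GIGABYTE.BIN"],  # sole file, renamed GIGABYTE.BIN
        ["umount", "/mnt"],
    ]
    return [run(c) for c in cmds]

if __name__ == "__main__":
    prepare_recovery_stick()
```

With the stick prepared, the board's recovery path takes over from the white USB port as described above.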
# ? Jul 5, 2016 19:16 |
|
Two mysterious kernel panics with our new v4 Xeons (E5-2640 v4) in just a month so far; a little bit worrisome. The first one was extra special.
I mention all this just because I recall someone in this thread complaining about problems with v4 Xeons, but I haven't seen too many complaints googling around the internet at large. Overall, of course, these new chips are awesome and are giving us 20 cores per Hadoop node for the cost and power of 12 not too long ago. Maybe less power
|
# ? Jul 6, 2016 00:34 |
|
Aquila posted:Two mysterious kernel panics with our new v4 Xeons (E5-2640 v4) in just a month so far, a little bit worrisome. First one was extra special: There were some BIOS issues mentioned around launch too, right? I wonder if more changed than is evident between Haswell-EP and Broadwell-EP.
|
# ? Jul 6, 2016 01:49 |
|
Got my 5820k up and running - did a quick Handbrake run and it's roughly twice as fast as my 4690K was there. And actually I'm not even clocking up all the way or hitting full utilization - I'm hanging out at 2.97 GHz (90% frequency) and ~85% utilization. Hmph, must be bottlenecking somewhere, or Handbrake isn't using enough threads, or something...
|
# ? Jul 6, 2016 02:01 |
|
Paul MaudDib posted:Got my 5820k up and running - did a quick Handbrake run and it's roughly twice as fast as my 4690K was there It's great, isn't it? My 6900K is more than 100% quicker on my Handbrake runs than my 3570K @ 4.6 was. I've found the same issue that you report: if you're doing two-pass with a turbo first pass, you will see far less than full CPU utilisation on that pass. I believe some of the code used in those fast-pass settings is not well threaded, and so it holds up the overall process. On the proper final pass, CPU usage is 100%, though.
|
# ? Jul 6, 2016 09:49 |
|
If Task Manager shows all 8 CPUs at 100% on a 4-core hyperthreaded i7, does that mean the performance is effectively the same as if the task had been run on an 8-core non-hyperthreaded chip of the same specs?
|
# ? Jul 6, 2016 13:50 |
|
Lowen SoDium posted:If Task Manager shows all 8 CPUs at 100% on a 4-core hyperthreaded i7, does that mean the performance is effectively the same as if the task had been run on an 8-core non-hyperthreaded chip of the same specs? No. A single core running two threads will not be as fast as two dedicated cores.
|
# ? Jul 6, 2016 13:54 |
|
Don Lapre posted:No. A single core running two threads will not be as fast as two dedicated cores. I understand that is normally the case, but normally Task Manager doesn't ever show all CPUs at 100% utilization. Usually you only see all the logical cores average out to around 50%. But in this case, they were averaging almost 100%.
|
# ? Jul 6, 2016 14:01 |
|
Aquila posted:Two mysterious kernel panics with our new v4 Xeons (E5-2640 v4) in just a month so far, a little bit worrisome. First one was extra special: That'd be me, but that was an issue with the board not detecting CPU0 in a DP arrangement that worked flawlessly with Haswells; the Broadwells are still giving us headaches. I would not be surprised if little bugs like this continue going forward. Intel is getting sloppy because they know they need to perpetuate the idea of Moore's law, and it's very obvious they're running out of ideas, fast.
|
# ? Jul 6, 2016 14:05 |
|
Lowen SoDium posted:I understand that is normally the case, but normally Task Manager doesn't ever show all CPUs at 100% utilization. Usually you only see all the logical cores average out to around 50%. But in this case, they were averaging almost 100%. Four of the cores in Task Manager are effectively "fake" cores and can't do the same amount of work as real ones. The only thing 100% utilization suggests is that the app would probably be able to take advantage of eight real cores and scale up performance as a result.
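For reference, the "8 CPUs" Task Manager draws are logical processors. On Linux you can count the physical cores behind them by pairing up the (physical id, core id) entries in /proc/cpuinfo, since hyperthread siblings share both values. A stdlib-only sketch (falls back to the logical count if the file lacks topology fields, e.g. in some VMs):

```python
import os

def physical_core_count():
    """Count distinct (package, core) pairs; HT siblings collapse to one."""
    cores, phys, core = set(), None, None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":", 1)[1].strip()
                elif line.startswith("core id"):
                    core = line.split(":", 1)[1].strip()
                elif not line.strip():       # blank line ends one processor block
                    if phys is not None and core is not None:
                        cores.add((phys, core))
                    phys = core = None
    except OSError:
        pass
    if phys is not None and core is not None:  # file may not end with a blank line
        cores.add((phys, core))
    return len(cores) or os.cpu_count()

if __name__ == "__main__":
    print(f"{physical_core_count()} physical cores, {os.cpu_count()} logical CPUs")
```

On a 4-core i7 with Hyper-Threading this reports 4 physical against 8 logical, which is exactly the gap the "fake cores" remark is pointing at.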
|
# ? Jul 6, 2016 14:18 |
|
Lowen SoDium posted:I understand that is normally the case, but normally task manager doesn't every show all CPUs at 100% utilization. Usually you only see all the logical cores average out to around 50%. But in this case, they were averaging at almost 100%. HyperThreading allows two threads to share the execution units of one core but it doesn't change the capabilities or number of those units in any way. I don't think it's possible to see anywhere near the full performance of a second core on both threads at once unless you somehow came up with a contrived scenario where none of the same execution units were used across both threads, and even if that's possible it might not do the trick.
|
# ? Jul 6, 2016 14:20 |
|
The specific question he asked was how you measure utilization on the hyperthreads, since it doesn't have any execution units of its own. Do both threads on a core report the same utilization percent or something?
|
# ? Jul 6, 2016 17:15 |
|
Paul MaudDib posted:The specific question he asked was how you measure utilization on the hyperthreads, since it doesn't have any execution units of its own. Do both threads on a core report the same utilization percent or something? I believe the utilization is how much % of the time the decode part of the pipeline is full, but I'm not 100% sure. 100% utilization just means that you have a full input pipeline to the CPU.
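For what it's worth, the per-logical-CPU percentage an OS reports is scheduler accounting - the fraction of the sampling interval during which that logical processor had a non-idle thread scheduled on it - rather than a direct measure of how full the execution pipeline is. A stdlib sketch of the Linux version, computed from the cumulative per-CPU jiffy counters in /proc/stat (Task Manager derives an analogous figure on Windows):

```python
import time

def snapshot():
    """Cumulative (idle, total) jiffies per logical CPU from /proc/stat."""
    counts = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            # per-CPU lines look like "cpu0 ...", skip the aggregate "cpu" line
            if not fields or not fields[0].startswith("cpu") or fields[0] == "cpu":
                continue
            vals = [int(v) for v in fields[1:]]
            idle = vals[3] + (vals[4] if len(vals) > 4 else 0)  # idle + iowait
            counts[fields[0]] = (idle, sum(vals))
    return counts

def utilization(interval=0.5):
    """Percent busy per logical CPU over the sampling interval."""
    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    util = {}
    for cpu, (idle1, total1) in after.items():
        idle0, total0 = before.get(cpu, (idle1, total1))
        total_d = max(total1 - total0, 1)
        busy_d = min(max(total_d - (idle1 - idle0), 0), total_d)
        util[cpu] = 100.0 * busy_d / total_d
    return util

if __name__ == "__main__":
    for cpu, pct in sorted(utilization().items()):
        print(f"{cpu}: {pct:5.1f}% busy")
```

So on a 4-core/8-thread chip, "8 CPUs at 100%" means all eight logical processors always had runnable work, not that twice the execution resources were actually delivered.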
|
# ? Jul 6, 2016 17:17 |
|
Paul MaudDib posted:The specific question he asked was how you measure utilization on the hyperthreads, since it doesn't have any execution units of its own. Do both threads on a core report the same utilization percent or something? Twerk from Home posted:I believe the utilization is how much % of the time the decode part of the pipeline is full, but I'm not 100% sure. 100% utilization just means that you have a full input pipeline to the CPU. Yes, this is more of what I was trying to understand. Thank you. Lowen SoDium fucked around with this message at 18:23 on Jul 6, 2016 |
# ? Jul 6, 2016 18:20 |
|
SuperDucky posted:That'd be me, but that was an issue with the board not detecting CPU0 in a DP arrangement that worked flawlessly with Haswells; the Broadwells are still giving us headaches. I would not be surprised if little bugs like this continue going forward. Intel is getting sloppy because they know they need to perpetuate the idea of Moore's law, and it's very obvious they're running out of ideas, fast. Intel had a lot of issues with Broadwell, and even more with Skylake, it appears. Now Kaby Lake is around the corner and Skylake is only just getting somewhat patched up. Here's hoping that the rumors about Cannon Lake being sort of a one-SKU deal are real, and Intel takes a year to polish their drat stuff up before just shoving out new crap for the sake of "new". I'll stick with my 3930K until I really see a reason to upgrade. drat thing is solid at 4.6 GHz. wipeout posted:Posted this in the overclocked thread, but asking in here cos y'all helpful. Isn't 4.4 GHz at 1.29 V a bit high a voltage for a Skylake? Or are they just pigs to OC like Broadwell and even IB have been? Heck, I need just 1.3 V at max Turbo on my 3930K, and most of the time I am using a lot less.
|
# ? Jul 6, 2016 22:53 |
|
I read many are hitting 4.5 at that voltage, at least in other forums I was reading, but I think it's pretty average for these. Not sure if the RAM speed is limiting it; probably not helping. It's keeping just over 60°C on the hottest core under load, so it's not heat. My old Sandy does that at 1.2 V with way worse cooling.
|
# ? Jul 7, 2016 00:12 |
|
My five-and-a-half-year-old Corsair H100 (not even an H100i) is starting to make grinding noises. I was hoping to wait until Kaby Lake, but I'd rather replace it now and then swap it into whatever PC I have next. I'm thinking of getting an H110i 280mm, but Corsair's naming system is a little weird. First there was the H110i, then the H110i GT (by CoolIT), then the H110i GTX (by Asetek), which was a smaller rad with lower-speed fans and tubes that mounted vertically rather than out the side. The H110i GT was later renamed the H110i, and the H110i GTX was renamed the H115i. Is that correct? I'd also look into replacing the stock fans with some Corsair SP140s, but their website lists the 120s as having both more airflow AND more static pressure AAAND running quieter than the SP140s.
|
# ? Jul 12, 2016 10:32 |
|
|
EdEddnEddy posted:Isn't 4.4 GHz at 1.29 V a bit high a voltage for a Skylake? Or are they just pigs to OC like Broadwell and even IB have been? Heck, I need just 1.3 V at max Turbo on my 3930K, and most of the time I am using a lot less. Maybe it's my motherboard, but mine did 1.41 V out of the box (6700K + ASRock Z170M Extreme4). I have to run a -0.08 V offset to keep temps sane. It does 4.5 GHz stable at that voltage (1.33 V). At "stock" voltage it just cooks itself. Most of the time it's only running 0.7-0.8 V.
|
# ? Jul 12, 2016 11:53 |