|
Twerk from Home posted:Interesting. I thought that recently there had been a huge push to use dedicated h264 / h265 / vp9 hardware encode / decode. I guess that Intel stuff doesn't have hardware VP9 encode and google loves vp9, so software encoding still matters.

It depends on the specific task. Hardware encode is super fast (a bottom-tier GPU can reencode like 4 1080p streams in realtime) but the quality is kinda garbage. So it gets used for stuff like streaming to Android phones and smart TVs and poo poo because it's going to be thrown away anyway and who cares as long as it's in a format that the TV can play? It's also great for real-time compressing gameplay videos. But the quality is still drastically inferior to a real encoder like x264, so if you're encoding something for archival you use the software encoder instead.

The compromise solution here is that if you need to capture something realtime but in high quality, you use the hardware encoder and spit out some wildly excessive bitrate (2-4x the target bitrate) that will have very few artifacts even with a lovely hardware encoder but is still smaller than raw frames, and then later you crunch it down with a software encoder.

I assume hardware decode is pretty much OK as long as the SIP core can decode fast enough and the disk can feed fast enough and so on, otherwise you'd get tons of corruption until the next keyframe. Decoding on a hardware decoder is pretty easy as long as your video fits within the decoder's feature capability level.

Paul MaudDib fucked around with this message at 23:12 on Jun 9, 2016 |
# ? Jun 9, 2016 23:10 |
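The capture-then-crunch workflow above can be sketched with ffmpeg. `h264_nvenc` and `libx264` are real ffmpeg encoders, but the filenames, bitrate, and CRF value here are made-up illustration numbers, and this just builds the command lines rather than running them:

```python
import shlex

# Step 1: real-time hardware encode at a wildly excessive bitrate (roughly
# 2-4x the final target) so the weak hardware encoder has little room to
# leave visible artifacts. "gameplay_source" is a placeholder input.
capture = [
    "ffmpeg", "-i", "gameplay_source",
    "-c:v", "h264_nvenc",   # GPU hardware encoder (NVIDIA NVENC)
    "-b:v", "40M",          # deliberately oversized bitrate
    "intermediate.mp4",
]

# Step 2: offline software re-encode with x264 for the archival copy,
# trading CPU time for much better quality per bit.
archive = [
    "ffmpeg", "-i", "intermediate.mp4",
    "-c:v", "libx264",      # software encoder
    "-preset", "slower",    # slower preset = better compression efficiency
    "-crf", "18",           # near-transparent quality target
    "archive.mp4",
]

print(shlex.join(capture))
print(shlex.join(archive))
```

Step 1 has to keep up with realtime, which the hardware block does easily; step 2 can run overnight, which is why it can afford the slow preset.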
|
|
|
Paul MaudDib posted:It depends on the specific task. Hardware encode is super fast (a bottom-tier GPU can reencode like 4 1080p streams in realtime) but the quality is kinda garbage. So it gets used for stuff like streaming to Android phones and smart TVs and poo poo because it's going to be thrown away anyway and who cares as long as it's in a format that the TV can play? It's also great for real-time compressing gameplay videos. But the quality is still drastically inferior to a real encoder like x264, so if you're encoding something for archival you use the software encoder instead. The compromise solution here is that if you need to capture something realtime but in high quality you use the hardware encoder and spit out some wildly excessive bitrate (2-4x the target bitrate) that will have very few artifacts even with a lovely hardware encoder but is still smaller than raw frames, and then later you crunch it down with a software encoder.

Interesting. I was thinking specifically of video calling / desktop sharing and remote meetings in general. Seems like for real-time things that won't be recorded or archived, hardware encode would be fine.
|
# ? Jun 9, 2016 23:12 |
Running Prime95 for a few hours used to be mandatory to demonstrate stability for overclockers, now people are afraid of it? Your CPU is going to throttle itself way before it's capable of doing any damage. Don't worry, she's not made out of glass; have some loving fun throwing her around.
|
|
# ? Jun 9, 2016 23:13 |
|
Twerk from Home posted:Interesting. I was thinking specifically of video calling / desktop sharing and remote meetings in general. Seems like for real-time things that won't be recorded or archived, hardware encode would be fine.

Yeah totally, that's an entirely valid use for hardware encoding. That's what I'm getting at here: hardware encode is good at realtime but bad at archival unless you just throw so much bandwidth at it that it can't possibly gently caress up.
|
# ? Jun 9, 2016 23:13 |
|
Pryor on Fire posted:don't worry she's not made out of glass

you do know where silicon wafers come from right? But, SmallFFT didn't use to use AVX. AVX is a hugely more stressful load than regular FPU loads. Nowadays Asus says Prime95 may damage your CPU.

quote:Users should avoid running Prime95 small FFTs on 5960X CPUs when overclocked. Over 4.4GHz, the Prime software pulls 400W of power through the CPU. It is possible this can cause internal degradation of processor components.

Electromigration is a real thing and 400W is certainly enough power that I'd be concerned. Even on smaller processors SmallFFT is a massively greater load than literally anything else. I'm not joking when I describe it as a power virus, it just hammers your FPU as hard as is physically possible to go.

Paul MaudDib fucked around with this message at 23:26 on Jun 9, 2016 |
# ? Jun 9, 2016 23:14 |
|
Paul MaudDib posted:Yeah totally that's an entirely valid use for hardware encoding. That's what I'm getting at here, hardware encode is good at realtime but bad at archival unless you just throw so much bandwidth at it that it can't possibly gently caress up.

I'm really curious if hardware encoding's inherent badness is counterbalanced by newer codecs. Now that HEVC hardware encoding is becoming more common, I'd be curious to see hardware HEVC vs software h264.
|
# ? Jun 9, 2016 23:34 |
|
Instant Sunrise posted:How does Prime95 compare to using something like IntelBurnTest to set an OC?

ibt just runs lapack/linpack a bunch of times so it won't find random fuckups in say vmenter/vmexit
|
# ? Jun 9, 2016 23:56 |
|
Twerk from Home posted:I'm really curious if hardware encoding's inherent badness is counterbalanced by newer codecs. Now that HEVC hardware encoding is becoming more common, i'd be curious to see hardware HEVC vs software h264. hw encoding is fine for non-autismal people
|
# ? Jun 9, 2016 23:56 |
|
Malcolm XML posted:hw encoding is fine for non-autismal people He says to a bunch of goons in a thread dedicated to Intel CPUs.
|
# ? Jun 12, 2016 21:22 |
|
Quick question. If anyone has a 6600k at around 4.5ghz and a 1080, what's the system power use? Just sourcing a power supply.
|
# ? Jun 12, 2016 22:37 |
|
Theoretically 500W would be enough but go with a 750W for overhead.
|
# ? Jun 12, 2016 22:39 |
|
BIG HEADLINE posted:Theoretically 500W would be enough but go with a 750W for overhead.

Thanks! Nabbed a 750w platinum. I finally found an excuse to upgrade (other half would like my 2500k). I wonder if any of the newer chips that support s1151 will have cache like the 5775c, that'd make this whole system upgrade as future proof as these things get.
|
# ? Jun 12, 2016 22:52 |
|
Skylake-R is the chip no one seems to know Intel made, seeing as people are still mentioning the 5775C. EDIT: Which might be because they're all hard-soldered into AIW motherboards. BIG HEADLINE fucked around with this message at 23:46 on Jun 12, 2016 |
# ? Jun 12, 2016 23:43 |
|
A 400w psu would be enough for a 1080 and an i7 with like 100w overhead lol
|
# ? Jun 13, 2016 00:06 |
|
Don Lapre posted:A 400w psu would be enough for a 1080 and an i7 with like 100w overhead lol

I had to assume he'd pick up a 2x8 1080 and overclock the i7. So yeah - 500W is more than enough, but we're talking the difference of like $20-30 between a 500W and 750W Platinum.
|
# ? Jun 13, 2016 00:09 |
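The napkin math being argued above can be written out. The 180W figure is the GTX 1080's stock TDP; the CPU and "everything else" numbers are round guesses for an overclocked quad-core build, not measurements:

```python
# Rough worst-case system draw estimate; all figures are assumptions.
cpu_w  = 150   # 6600K/6700K overclocked to ~4.5GHz, well above its 91W TDP
gpu_w  = 200   # GTX 1080 (180W stock TDP) with some overclocking slack
rest_w = 75    # motherboard, RAM, drives, fans, USB devices

load_w = cpu_w + gpu_w + rest_w
print("estimated load:", load_w, "W")

# PSUs tend to be most efficient around 50% load, one more reason the
# oversized unit isn't pure waste.
for psu in (400, 500, 750):
    print(psu, "W unit:", psu - load_w, "W headroom,",
          round(load_w / psu * 100), "% loaded")
```

By this estimate even 500W leaves real headroom, which is consistent with both posts: 400-500W is technically enough, and the 750W buys margin and efficiency rather than necessity.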
|
BIG HEADLINE posted:Skylake-R is the chip no one seems to know Intel made, seeing as people are still mentioning the 5775C.

quote:EDIT: Which might be because they're all hard-soldered into AIW motherboards.
|
# ? Jun 13, 2016 08:31 |
|
I'm getting tired of Minecraft, of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c. Would a 6700k help? Does Java care about cores vs threads?
|
# ? Jun 14, 2016 15:55 |
|
Ak Gara posted:I'm getting tired of Minecraft of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c.

Just curious, but are you using a lot of mods as well? It's kinda unusual for Minecraft to peg a 2500k without them. You will certainly get better performance with the newer chip regardless, but it seems like something's gone fucky with your Minecraft install/world.
|
# ? Jun 14, 2016 16:06 |
|
fishmech posted:Just curious, but are you using a lot of mods as well? It's kinda unusual for Minecraft to peg a 2500k without them. 100's of mods!
|
# ? Jun 14, 2016 16:26 |
|
Ak Gara posted:100's of mods! In that case, the new chip will probably also peg at 100% on all cores with all that stuff you have, but things will run better while doing it.
|
# ? Jun 14, 2016 16:53 |
|
Honestly if it's that multithreaded (I assume because the mods are all separate?) and you can afford it, maybe it's time to move up to something with more than four cores.
|
# ? Jun 14, 2016 18:48 |
|
mediaphage posted:Honestly if it's that multithreaded (I assume because the mods are all separate?) and you can afford it, maybe it's time to move up to something with more than four cores.

Modded MC primarily uses a single core. It's not very multithreaded at all.
|
# ? Jun 14, 2016 18:53 |
|
Khorne posted:Modded MC primarily uses a single core. It's not very multithreaded at all. Sorry, I never really got into it, so thanks for clarifying. Wonder what's up with the core saturation then.
|
# ? Jun 14, 2016 19:08 |
|
Ak Gara posted:I'm getting tired of Minecraft of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c.

That's nuts. I also doubt how much performance increase you'd really get from upgrading, seeing as you're at 5GHz... An OC'd 6700K would be faster, but by enough to cope smoothly with a load maxing out a 5GHz 2500K? I'm not sure.
|
# ? Jun 14, 2016 19:14 |
|
Ak Gara posted:I'm getting tired of Minecraft of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c.

Java itself supports multi threading, but it sounds like Minecraft doesn't. AFAIK the only big-name language whose threads can't actually run in parallel is Python, because of the Global Interpreter Lock. And any other lovely knockoff languages that mimic it, I guess.
|
# ? Jun 14, 2016 19:24 |
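The GIL point above is easy to demonstrate: CPython threads all run and finish correctly, but the interpreter only executes Python bytecode on one thread at a time, so CPU-bound threads don't spread across cores (threads still help for I/O-bound work, and C extensions can release the GIL). A minimal sketch:

```python
import threading

def count(n):
    # Pure-Python, CPU-bound loop: serialized by the GIL, so four of these
    # on four threads take roughly as long as running them back to back.
    total = 0
    for _ in range(n):
        total += 1
    return total

results = []
lock = threading.Lock()

def worker(n):
    r = count(n)
    with lock:              # protect the shared list
        results.append(r)

threads = [threading.Thread(target=worker, args=(250_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))         # correct result, just not parallel speedup
```

The usual escape hatch for CPU-bound Python work is the `multiprocessing` module, which sidesteps the GIL by using separate processes.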
|
Try Minecraft stock, and add mods until it hits 100%, then, well, don't use that mod, or see if they have a fixed version or something? A 5Ghz 2500K is no slouch even today.
|
# ? Jun 14, 2016 19:32 |
|
EdEddnEddy posted:Try Minecraft stock, and add mods until it hits 100%, then well don't use that mod or see if they have a fixed version or something? A 5Ghz 2500K is no slouch even today.

That sounds like a good idea. After trying stock, cpu usage was 10%. I bet it's something stupid like flowers.

[edit] When people say Minecraft isn't multithreaded, does that include multi core? 8 core vs 4 core etc... I've never looked into hyperthreads and seen what they do / how they work.
|
# ? Jun 14, 2016 20:05 |
|
Ak Gara posted:That sounds like a good idea. After trying stock cpu usage was 10%

Yeah, threads are just computer science speak for 'the bit of the process that changes a lot while it's executing', so multithreading is needed if you want to split that program's work up between more cores, as well as if you want to split it up between cores that can handle multiple threads.

E: hyperthreading is actually pretty cool and impressive. Basically you add a bit more stuff to the core so that you can keep the working data of more than one process in there, and whenever one process doesn't use all of the core's circuitry or is waiting for something, the other process does. And then there's a bunch of clever tricks to make it work faster and more efficiently, as well as sometimes just biting the bullet and jamming in more transistors.

is that good fucked around with this message at 20:30 on Jun 14, 2016 |
# ? Jun 14, 2016 20:26 |
|
Ak Gara posted:That sounds like a good idea. After trying stock cpu usage was 10% Next you need to try Minecraft in VR. You will never go back once you are IN Minecraft.
|
# ? Jun 14, 2016 22:08 |
|
Also Minecraft with Motion Controls + Virtux Omni = Walk around Minecraft without having to teleport for the most immersive Minecraft experience.
|
# ? Jun 14, 2016 22:08 |
|
Ak Gara posted:That sounds like a good idea. After trying stock cpu usage was 10%

Getting rid of any problematic mods is the correct solution, but what is your fundamental problem? Is the problem that Minecraft is running slow even when using all of the CPU, or is the problem that your other programs are running slow because MC is hogging all the CPU? If the problem is the latter, you may mitigate it by going to Task Manager and configuring process affinity for the Minecraft process, so that it is only allowed to use 2 or 3 cores of your CPU.
|
# ? Jun 14, 2016 23:49 |
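The affinity workaround above can also be scripted instead of clicked. A minimal sketch assuming Linux, where `os.sched_setaffinity` exists (on Windows you'd use Task Manager's "Set affinity" as described, or `start /affinity`):

```python
import os

# Pin the current process to a subset of its allowed cores so a CPU hog
# can't starve everything else. Linux-only API; guarded so the snippet is
# a no-op elsewhere.
if hasattr(os, "sched_setaffinity"):
    allowed = sorted(os.sched_getaffinity(0))  # cores we may currently use
    if len(allowed) > 1:
        keep = set(allowed[:-1])               # leave one core free for
        os.sched_setaffinity(0, keep)          # other programs
    print(sorted(os.sched_getaffinity(0)))
```

To do this to another process (like a running Minecraft JVM) you'd pass its PID instead of `0`, given sufficient privileges.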
|
Ak Gara posted:That sounds like a good idea. After trying stock cpu usage was 10%

To explain at length, HyperThreading is Intel's name for their implementation of a concept called simultaneous multithreading or SMT. This is a way of extracting thread-level parallelism from your load to finish it faster.

Processors have different modules internally for handling different types of calculations. Threads are sequences of instructions which if executed by hand would be done one at a time, but most processors (with what's called a superscalar architecture) are able to determine if the result of an instruction at the head of the queue depends upon the result of anything that's still underway or not. If this is not the case and the waiting and working instructions use different modules, then there is no need to wait to clear the pipeline and the processor finishes the thread faster. They also generally double (or triple+) up on the pipelines that handle more common types of instructions to avoid getting bottlenecked on those.

However, if you have more than one thread they still have to take turns even if the modules they need aren't being used by the other active threads. The most obvious way to prevent this is by adding more processors (or more cores), but this is expensive and has its own limitations. Simultaneous multithreading or specifically HT allows two (or more, for some non-x86 architectures) threads to queue up on one physical core at once, both trying to saturate all of that core's available resources. To the OS, this looks like an extra core for every extra thread that's allowed to queue up.

However, the performance benefit obviously doesn't match what a real second core would get you. In the real world, I think it generally has a benefit equivalent to 10-30% of an extra core depending on what kind of load you have. It does add some power consumption too, but not as much as the performance benefit if you're actually loading all of the logical cores.
Eletriarnation fucked around with this message at 00:14 on Jun 15, 2016 |
# ? Jun 14, 2016 23:59 |
Hyperthreading is generally useless outside of a few specific server and media encoding workloads, as good as Intel has been at marketing it you probably shouldn't care. And even that 10-30% on the server workload that was written for hyperthreading is overly optimistic, you really only see that in the most synthetic of synthetic benchmarks. It's certainly not $100 worth of caring, 6600k is the way to go.
|
|
# ? Jun 15, 2016 15:54 |
|
Pryor on Fire posted:Hyperthreading is generally useless outside of a few specific server and media encoding workloads, as good as Intel has been at marketing it you probably shouldn't care. And even that 10-30% on the server workload that was written for hyperthreading is overly optimistic, you really only see that in the most synthetic of synthetic benchmarks. It's certainly not $100 worth of caring, 6600k is the way to go. I was about to disagree but then I realized the only thing I noticed a difference on was my editing time (which for me was a big deal), as you pointed out.
|
# ? Jun 15, 2016 18:42 |
|
<Stupidity>
necrobobsledder fucked around with this message at 14:51 on Jun 16, 2016 |
# ? Jun 15, 2016 22:32 |
|
necrobobsledder posted:Hyperthreading 1.0 happened well over a decade ago when Java applications were so king (heck, they still are honestly within the Fortune 500) and its best cases were really contrived but ok enough. Only with the second incarnation of Hyperthreading were CPUs better able to understand scheduling of micro-ops and caching enough to make more improvements in scheduling hardware threads better.

It didn't help that Netburst's replay system could do all sorts of horrible things to the pipeline.
|
# ? Jun 15, 2016 23:09 |
|
necrobobsledder posted:SMT has a lot more to it than just instruction scheduling, but the fundamental reason why Hyperthreading (tm) / SMT only gets you two "logical" cores is that SMT is a form of register file and ALU duty cycle utilization similar to how DDR RAM works. That is, in an SMT processor you are able to load registers and process them on both the high and low sides of a clock signal. Because there's a pretty big flurry of bits flipping (causes more noise than necessary in certain circuits) when you do that on top of cache coherency and branch prediction issues this makes sending out instructions correctly and efficiently pretty difficult.

This is a really weird and misleading description of more or less everything you touched on, imo. DDR RAM isn't even about multiplexing two data streams into one interface, nor are Intel's SMT implementations based on DDR clocking of registers (AFAIK), nor does switching noise have any impact specific to correct and efficient instruction dispatch in the presence of HT.
|
# ? Jun 16, 2016 05:49 |
|
necrobobsledder posted:SMT has a lot more to it than just instruction scheduling, but the fundamental reason why Hyperthreading (tm) / SMT only gets you two "logical" cores is that SMT is a form of register file and ALU duty cycle utilization similar to how DDR RAM works. That is, in an SMT processor you are able to load registers and process them on both the high and low sides of a clock signal. Because there's a pretty big flurry of bits flipping (causes more noise than necessary in certain circuits) when you do that on top of cache coherency and branch prediction issues this makes sending out instructions correctly and efficiently pretty difficult.

POWER8 supports eight SMT threads per core. Both the Xeon Phi products have done four-way SMT on x86. There is no two-logical-core limit.

Hyperthreading is not related to DDR, or triggering anything on both the rising and falling edge of a clock signal. It's just keeping track of multiple states for a given set of execution units, and switching rapidly between them. This is most helpful when you've got workloads that are hitting the slower caches or main memory frequently - there's enough idle time that you want to find some work to fill it, but not enough that it'd be worth it to hit the OS-level scheduler.

You can track as much state as you like, but unless you're running embarrassingly parallel work that blocks a lot, diminishing returns kick in quickly. So, most implementations for consumer, workstation, and ordinary server hardware don't bother with more than two threads per core.
|
# ? Jun 16, 2016 13:43 |
|
Space Gopher posted:POWER8 supports eight SMT threads per core. Both the Xeon Phi products have done four-way SMT on x86. There is no two-logical-core limit.
|
# ? Jun 16, 2016 14:51 |
|
|
|
Just got an ASRock X99 Fatality Killer system. Bluescreening like crazy.

First issue, the BIOS/UEFI battery: totally dead. System won't turn on without a good battery.

Next, bluescreen instantly playing youtube. That turned out to be the Killer gaming NIC. Just turned off that stupid thing and use the built-in Intel NIC.

Last, random bluescreens pointing to the Windows kernel, which usually means memory. System uses G.Skill DDR4 2666 stuff. I upped the voltage to 1.25v from 1.20 and that seems to have stabilized the thing. G.Skill poo poo sucks and I don't trust it. Think I should go ahead and replace the memory?
|
# ? Jun 16, 2016 15:04 |