Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Twerk from Home posted:

Interesting. I thought that recently there had been a huge push to use dedicated h264 / h265 / vp9 hardware encode / decode. I guess that Intel stuff doesn't have hardware VP9 encode and google loves vp9, so software encoding still matters.

It depends on the specific task. Hardware encode is super fast (a bottom-tier GPU can reencode like 4 1080p streams in realtime) but the quality is kinda garbage. So it gets used for stuff like streaming to Android phones and smart TVs and poo poo because it's going to be thrown away anyway and who cares as long as it's in a format that the TV can play? It's also great for real-time compressing gameplay videos. But the quality is still drastically inferior to a real encoder like x264, so if you're encoding something for archival you use the software encoder instead. The compromise solution here is that if you need to capture something realtime but in high quality you use the hardware encoder and spit out some wildly excessive bitrate (2-4x the target bitrate) that will have very few artifacts even with a lovely hardware encoder but is still smaller than raw frames, and then later you crunch it down with a software encoder.
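
For anyone who wants to try it, here's a minimal sketch of that capture-then-crunch workflow, driving ffmpeg from Python. It assumes an ffmpeg build with NVENC support; the filenames, the 40M intermediate bitrate, and the CRF value are illustrative placeholders, not recommendations.

code:

import subprocess

# Stage 1: realtime capture with the hardware encoder at a deliberately
# excessive bitrate, so the encoder's weaknesses are buried under bits.
subprocess.run([
    "ffmpeg", "-i", "capture_source.mkv",
    "-c:v", "h264_nvenc", "-b:v", "40M",
    "-c:a", "copy",
    "intermediate.mkv",
], check=True)

# Stage 2: offline archival pass with the software encoder (x264),
# slow preset, quality-targeted rate control.
subprocess.run([
    "ffmpeg", "-i", "intermediate.mkv",
    "-c:v", "libx264", "-preset", "slow", "-crf", "18",
    "-c:a", "copy",
    "archive.mkv",
], check=True)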

I assume hardware decode is pretty much OK as long as the SIP core can decode fast enough and the disk can feed fast enough and so on, otherwise you'd get tons of corruption until the next keyframe. Decoding on a hardware decoder is pretty easy as long as your video fits within the decoder's feature capability level.

Paul MaudDib fucked around with this message at 23:12 on Jun 9, 2016


Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Paul MaudDib posted:

It depends on the specific task. Hardware encode is super fast (a bottom-tier GPU can reencode like 4 1080p streams in realtime) but the quality is kinda garbage. So it gets used for stuff like streaming to Android phones and smart TVs and poo poo because it's going to be thrown away anyway and who cares as long as it's in a format that the TV can play? It's also great for real-time compressing gameplay videos. But the quality is still drastically inferior to a real encoder like x264, so if you're encoding something for archival you use the software encoder instead. The compromise solution here is that if you need to capture something realtime but in high quality you use the hardware encoder and spit out some wildly excessive bitrate (2-4x the target bitrate) that will have very few artifacts even with a lovely hardware encoder but is still smaller than raw frames, and then later you crunch it down with a software encoder.

I assume hardware decode is pretty much OK as long as the SIP core can decode fast enough and the disk can feed fast enough and so on, otherwise you'd get tons of corruption until the next keyframe.

Interesting. I was thinking specifically of video calling / desktop sharing and remote meetings in general. Seems like for real-time things that won't be recorded or archived, hardware encode would be fine.

Pryor on Fire
May 14, 2013

they don't know all alien abduction experiences can be explained by people thinking saving private ryan was a documentary

Running Prime95 for a few hours used to be mandatory to demonstrate stability for overclockers, now people are afraid of it? Your CPU is going to throttle itself way before it's capable of doing any damage. Don't worry, she's not made out of glass, have some loving fun throwing her around.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Twerk from Home posted:

Interesting. I was thinking specifically of video calling / desktop sharing and remote meetings in general. Seems like for real-time things that won't be recorded or archived, hardware encode would be fine.

Yeah totally that's an entirely valid use for hardware encoding. That's what I'm getting at here, hardware encode is good at realtime but bad at archival unless you just throw so much bandwidth at it that it can't possibly gently caress up.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Pryor on Fire posted:

don't worry she's not made out of glass

you do know where silicon wafers come from right?

:goonsay:

But SmallFFT didn't always use AVX. AVX is a hugely more stressful load than regular FPU loads.

Nowadays Asus says Prime95 may damage your CPU.

quote:

Users should avoid running Prime95 small FFTs on 5960X CPUs when overclocked. Over 4.4GHz, the Prime software pulls 400W of power through the CPU. It is possible this can cause internal degradation of processor components.

Electromigration is a real thing and 400W is certainly enough power that I'd be concerned. Even on smaller processors SmallFFT is a massively greater load than literally anything else. I'm not joking when I describe it as a power virus, it just hammers your FPU as hard as is physically possible to go.
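
To make the power-virus point concrete, here's a toy sketch (numpy assumed) of why small FFTs are such a brutal load: the working set fits entirely in cache, so the floating-point units grind continuously instead of stalling on memory. It's nowhere near Prime95's hand-tuned AVX code, just an illustration of the shape of the workload.

code:

import numpy as np

# A transform small enough to live entirely in cache: the FPU never
# gets a rest waiting on RAM, which is the whole point of SmallFFT.
data = np.random.rand(4096) + 1j * np.random.rand(4096)

for _ in range(1_000_000):
    data = np.fft.fft(data)
    data /= np.abs(data).max() + 1.0  # renormalize so values stay finite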

Paul MaudDib fucked around with this message at 23:26 on Jun 9, 2016

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Paul MaudDib posted:

Yeah totally that's an entirely valid use for hardware encoding. That's what I'm getting at here, hardware encode is good at realtime but bad at archival unless you just throw so much bandwidth at it that it can't possibly gently caress up.

I'm really curious whether hardware encoding's inherent badness is counterbalanced by newer codecs. Now that HEVC hardware encoding is becoming more common, I'd be curious to see hardware HEVC vs software h264.
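
One way to actually run that comparison, sketched in Python: encode the same clip both ways at a matched bitrate, then score each result against the source with ffmpeg's libvmaf filter. This assumes an ffmpeg build with both hevc_nvenc and libvmaf compiled in; clip.mkv and the 8M bitrate are placeholders.

code:

import subprocess

SOURCE = "clip.mkv"

def encode(codec, out, bitrate="8M"):
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                    "-c:v", codec, "-b:v", bitrate, "-an", out], check=True)

def vmaf_lines(distorted):
    # libvmaf logs a pooled VMAF score; fish it out of stderr.
    result = subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", SOURCE,
         "-lavfi", "libvmaf", "-f", "null", "-"],
        capture_output=True, text=True)
    return [l for l in result.stderr.splitlines() if "VMAF" in l]

encode("hevc_nvenc", "hw_hevc.mkv")   # hardware HEVC
encode("libx264", "sw_h264.mkv")      # software h264
print(vmaf_lines("hw_hevc.mkv"))
print(vmaf_lines("sw_h264.mkv"))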

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Instant Sunrise posted:

How does Prime95 compare to using something like IntelBurnTest to set an OC?

ibt just runs lapack/linpack a bunch of times

so it won't find random fuckups in, say, vmenter/vmexit
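
roughly what that loop looks like, sketched in Python (numpy assumed; the matrix size is arbitrary): solve a big dense system over and over and check the residual, since an unstable overclock tends to produce wrong answers before it crashes outright

code:

import numpy as np

N = 2048
for trial in range(10):
    a = np.random.rand(N, N)
    b = np.random.rand(N)
    x = np.linalg.solve(a, b)            # LAPACK's dense solver under the hood
    residual = np.abs(a @ x - b).max()
    # On stable hardware this is tiny; a flaky OC shows up as a blown residual.
    assert residual < 1e-6, f"trial {trial}: residual {residual:g}"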

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Twerk from Home posted:

I'm really curious whether hardware encoding's inherent badness is counterbalanced by newer codecs. Now that HEVC hardware encoding is becoming more common, I'd be curious to see hardware HEVC vs software h264.

hw encoding is fine for non-autismal people

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Malcolm XML posted:

hw encoding is fine for non-autismal people

He says to a bunch of goons in a thread dedicated to Intel CPUs.

GRINDCORE MEGGIDO
Feb 28, 1985


Quick question. If anyone has a 6600k at around 4.5ghz and a 1080, what's the system power use? Just sourcing a power supply.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Theoretically 500W would be enough but go with a 750W for overhead.

GRINDCORE MEGGIDO
Feb 28, 1985


BIG HEADLINE posted:

Theoretically 500W would be enough but go with a 750W for overhead.

Thanks! Nabbed a 750w platinum. I finally found an excuse to upgrade (other half would like my 2500k).

I wonder if any of the newer chips that support s1151 will have cache like the 5775c; that'd make this whole system upgrade as future-proof as these things get.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Skylake-R is the chip no one seems to know Intel made, seeing as people are still mentioning the 5775C.

EDIT: Which might be because they're all hard-soldered into AIO motherboards.

BIG HEADLINE fucked around with this message at 23:46 on Jun 12, 2016

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
A 400w psu would be enough for a 1080 and an i7 with like 100w overhead lol

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Don Lapre posted:

A 400w psu would be enough for a 1080 and an i7 with like 100w overhead lol

I had to assume he'd pick up a 2x8 1080 and overclock the i7. So yeah - 500W is more than enough, but we're talking the difference of like $20-30 between a 500W and 750W Platinum.
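
The back-of-envelope sum behind that call, as a sketch (the wattages are rough ballpark figures, not measurements):

code:

gtx_1080 = 180   # W board power at stock; an overclocked 2x8-pin card draws more
i7_oc    = 150   # W for a generously overclocked quad-core
rest     = 60    # W for drives, fans, RAM, and conversion losses

peak = gtx_1080 + i7_oc + rest
print(f"{peak} W estimated peak")           # ~390 W
print(f"{peak / 500:.0%} of a 500 W unit")  # comfortable, with more headroom on a 750 W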

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

BIG HEADLINE posted:

Skylake-R is the chip no one seems to know Intel made, seeing as people are still mentioning the 5775C.
They don't count because, yes,

quote:

EDIT: Which might be because they're all hard-soldered into AIO motherboards.
so you can't just put them on a random z170 board, which defeats the whole point.

Ak Gara
Jul 29, 2005

That's just the way he rolls.
I'm getting tired of Minecraft of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c.

Would a 6700k help? Does Java care about cores vs threads?

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

Ak Gara posted:

I'm getting tired of Minecraft of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c.

Would a 6700k help? Does Java care about cores vs threads?

Just curious, but are you using a lot of mods as well? It's kinda unusual for Minecraft to peg a 2500k without them.

You will certainly get better performance with the newer chip regardless, but it seems like something's gone fucky with your Minecraft install/world.

Ak Gara
Jul 29, 2005

That's just the way he rolls.

fishmech posted:

Just curious, but are you using a lot of mods as well? It's kinda unusual for Minecraft to peg a 2500k without them.

You will certainly get better performance with the newer chip regardless, but it seems like something's gone fucky with your Minecraft install/world.

100's of mods! :v:

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

Ak Gara posted:

100's of mods! :v:

In that case, the new chip will probably also peg at 100% on all cores with all that stuff you have, but things will run better while doing it. :v:

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through
Honestly if it's that multithreaded (I assume because the mods are all separate?) and you can afford it, maybe it's time to move up to something with more than four cores.

Khorne
May 1, 2002

mediaphage posted:

Honestly if it's that multithreaded (I assume because the mods are all separate?) and you can afford it, maybe it's time to move up to something with more than four cores.
Modded MC primarily uses a single core. It's not very multithreaded at all.
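
A quick way to check that on your own box, assuming the psutil package: sample per-core load while the game is running and look for one pegged core with the rest mostly idle.

code:

import psutil

# Five-second sample of per-core utilization while Minecraft is running.
# A mostly single-threaded game shows up as one core near 100%.
per_core = psutil.cpu_percent(interval=5, percpu=True)
print(per_core)   # e.g. [97.8, 14.2, 11.0, 12.5] on a quad-core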

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through

Khorne posted:

Modded MC primarily uses a single core. It's not very multithreaded at all.

Sorry, I never really got into it, so thanks for clarifying. Wonder what's up with the core saturation then.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Ak Gara posted:

I'm getting tired of Minecraft of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c.

Would a 6700k help? Does Java care about cores vs threads?

That's nuts. I also doubt you'd see much of a performance increase from upgrading, seeing as you're already at 5GHz...

An OC'd 6700K would be faster, but by enough to cope smoothly with a load that maxes out a 5GHz 2500K? I'm not sure.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Ak Gara posted:

I'm getting tired of Minecraft of all loving things, maxing out my 5ghz 2500k. 100% cpu usage across all 4 cores @ 67c.

Would a 6700k help? Does Java care about cores vs threads?

Java itself supports multithreading, but it sounds like Minecraft doesn't take much advantage of it.

AFAIK the only big-name language whose standard implementation can't run threads in parallel is Python, because of CPython's Global Interpreter Lock. And any other lovely knockoff languages that mimic it, I guess.
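
A quick demonstration of the CPython point, for the skeptical: two CPU-bound threads finish in about the same wall time as doing the work twice in a row, because the interpreter only lets one thread execute bytecode at a time.

code:

import threading
import time

def burn(n=20_000_000):
    while n:
        n -= 1

start = time.perf_counter()
burn(); burn()                                    # sequential baseline
sequential = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=burn) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
threaded = time.perf_counter() - start

# On CPython these come out roughly equal; real CPU parallelism
# needs multiprocessing (or a runtime without a GIL).
print(f"sequential: {sequential:.2f}s, two threads: {threaded:.2f}s")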

EdEddnEddy
Apr 5, 2012



Try Minecraft stock, and add mods until it hits 100%; then, well, don't use that mod, or see if they have a fixed version or something? A 5GHz 2500K is no slouch even today.

Ak Gara
Jul 29, 2005

That's just the way he rolls.

EdEddnEddy posted:

Try Minecraft stock, and add mods until it hits 100%; then, well, don't use that mod, or see if they have a fixed version or something? A 5GHz 2500K is no slouch even today.

That sounds like a good idea. After trying stock cpu usage was 10%

I bet it's something stupid like flowers.

[edit] When people say Minecraft isn't multithreaded, does that include multi core? 8 core vs 4 core etc... I've never looked into hyperthreads and seen what they do / how they work.

is that good
Apr 14, 2012

Ak Gara posted:

That sounds like a good idea. After trying stock cpu usage was 10%

I bet it's something stupid like flowers.

[edit] When people say Minecraft isn't multithreaded, does that include multi core? 8 core vs 4 core etc... I've never looked into hyperthreads and seen what they do / how they work.

Yeah, threads are just computer science speak for 'the bit of the process that changes a lot while it's executing', so multithreading is needed if you want to split a program's work up between more cores, as well as if you want to split it up between cores that can handle multiple threads.
E: hyperthreading is actually pretty cool and impressive. Basically you add a bit more stuff to the core so that you can keep the working data of more than one process in there, and whenever one process doesn't use all of the core's circuitry or is waiting for something, the other process does. And then there's a bunch of clever tricks to make it work faster and more efficiently, as well as sometimes just biting the bullet and jamming in more transistors.
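
Concretely, all that extra state shows up to the operating system as additional logical cores. A quick way to see the split on your own machine (psutil assumed for the physical count):

code:

import os
import psutil

print("logical cores: ", os.cpu_count())                    # counts HT siblings
print("physical cores:", psutil.cpu_count(logical=False))   # real cores only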

is that good fucked around with this message at 20:30 on Jun 14, 2016

EdEddnEddy
Apr 5, 2012



Ak Gara posted:

That sounds like a good idea. After trying stock cpu usage was 10%

I bet it's something stupid like flowers.

[edit] When people say Minecraft isn't multithreaded, does that include multi core? 8 core vs 4 core etc... I've never looked into hyperthreads and seen what they do / how they work.

Next you need to try Minecraft in VR.

You will never go back once you are IN Minecraft.

EdEddnEddy
Apr 5, 2012



Also Minecraft with Motion Controls + Virtuix Omni = Walk around Minecraft without having to teleport, for the most immersive Minecraft experience.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Ak Gara posted:

That sounds like a good idea. After trying stock cpu usage was 10%

I bet it's something stupid like flowers.

Getting rid of any problematic mods is the correct solution, but what is your fundamental problem? Is it that Minecraft runs slow even when using all of the CPU, or that your other programs run slow because MC is hogging all the CPU? If it's the latter, you can mitigate it by going to Task Manager and configuring processor affinity for the Minecraft process, so that it's only allowed to use 2 or 3 cores of your CPU.
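
The same mitigation, scripted, for anyone who doesn't want to click through Task Manager every launch. A sketch assuming the psutil package and that the game runs under javaw.exe, as the Windows launcher typically does:

code:

import psutil

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "javaw.exe":
        proc.cpu_affinity([0, 1])    # confine Minecraft to the first two cores
        print(f"pinned pid {proc.pid} to cores 0-1")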

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Ak Gara posted:

That sounds like a good idea. After trying stock cpu usage was 10%

I bet it's something stupid like flowers.

[edit] When people say Minecraft isn't multithreaded, does that include multi core? 8 core vs 4 core etc... I've never looked into hyperthreads and seen what they do / how they work.

To explain at length, HyperThreading is Intel's name for their implementation of a concept called simultaneous multithreading or SMT. This is a way of extracting thread-level parallelism from your load to finish it faster.

Processors have different modules internally for handling different types of calculations. Threads are sequences of instructions which, if executed by hand, would be done one at a time, but most processors (with what's called a superscalar architecture) can determine whether an instruction at the head of the queue depends upon the result of anything that's still underway. If it doesn't, and the waiting and working instructions use different modules, then there is no need to wait for the pipeline to clear, and the processor finishes the thread faster. They also generally double (or triple+) up on the pipelines that handle more common types of instructions, to avoid getting bottlenecked on those.

However, if you have more than one thread they still have to take turns even if the modules they need aren't being used by the other active threads. The most obvious way to prevent this is by adding more processors (or more cores), but this is expensive and has its own limitations.

Simultaneous multithreading or specifically HT allows two (or more, for some non-x86 architectures) threads to queue up on one physical core at once, both trying to saturate all of that core's available resources. To the OS, this looks like an extra core for every extra thread that's allowed to queue up. However, the performance benefit obviously doesn't match what a real second core would get you. In the real world, I think it generally has a benefit equivalent to 10-30% of an extra core depending on what kind of load you have. It does add some power consumption too, but not as much as the performance benefit if you're actually loading all of the logical cores.
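
A toy model of where a figure like 10-30% comes from, as a sketch (the stall probability and the single shared issue slot are invented to show the shape of the effect, not to model a real core):

code:

import random

def cycles_to_finish(n_threads, work=100_000, stall=0.3, seed=1):
    # One shared issue slot per cycle; a thread is ready unless it is
    # stalled waiting on memory. SMT = letting a second thread queue up.
    rng = random.Random(seed)
    remaining = [work] * n_threads
    elapsed = 0
    while any(remaining):
        ready = [i for i, r in enumerate(remaining) if r and rng.random() > stall]
        if ready:
            remaining[rng.choice(ready)] -= 1
        elapsed += 1
    return elapsed

one = cycles_to_finish(1)   # a lone thread leaves idle cycles whenever it stalls
smt = cycles_to_finish(2)   # a second thread fills some of those holes
print(f"throughput gain from the second thread: {2 * one / smt:.2f}x")
# prints roughly 1.3x: far better than nothing, far short of a real second core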

Eletriarnation fucked around with this message at 00:14 on Jun 15, 2016

Pryor on Fire
May 14, 2013

they don't know all alien abduction experiences can be explained by people thinking saving private ryan was a documentary

Hyperthreading is generally useless outside of a few specific server and media encoding workloads; as good as Intel has been at marketing it, you probably shouldn't care. And even that 10-30% on the server workload that was written for hyperthreading is overly optimistic; you really only see that in the most synthetic of synthetic benchmarks. It's certainly not $100 worth of caring; the 6600k is the way to go.

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through

Pryor on Fire posted:

Hyperthreading is generally useless outside of a few specific server and media encoding workloads; as good as Intel has been at marketing it, you probably shouldn't care. And even that 10-30% on the server workload that was written for hyperthreading is overly optimistic; you really only see that in the most synthetic of synthetic benchmarks. It's certainly not $100 worth of caring; the 6600k is the way to go.

I was about to disagree but then I realized the only thing I noticed a difference on was my editing time (which for me was a big deal), as you pointed out.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
<Stupidity>

necrobobsledder fucked around with this message at 14:51 on Jun 16, 2016

phongn
Oct 21, 2006

necrobobsledder posted:

Hyperthreading 1.0 happened well over a decade ago when Java applications were so king (heck, they still are honestly within the Fortune 500) and its best cases were really contrived but ok enough. Only with the second incarnation of Hyperthreading were CPUs better able to understand scheduling of micro-ops and caching enough to make more improvements in scheduling hardware threads better.

It didn't help that Netburst's replay system could do all sorts of horrible things to the pipeline.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

necrobobsledder posted:

SMT has a lot more to it than just instruction scheduling, but the fundamental reason why Hyperthreading (tm) / SMT only gets you two "logical" cores is that SMT is a form of register file and ALU duty cycle utilization similar to how DDR RAM works. That is, in an SMT processor you are able to load registers and process them on both the high and low sides of a clock signal. Because there's a pretty big flurry of bits flipping (causes more noise than necessary in certain circuits) when you do that on top of cache coherency and branch prediction issues this makes sending out instructions correctly and efficiently pretty difficult.

This is a really weird and misleading description of more or less everything you touched on, imo. DDR RAM isn't even about multiplexing two data streams into one interface, nor are Intel's SMT implementations based on DDR clocking of registers (AFAIK), nor does switching noise have any impact specific to correct and efficient instruction dispatch in the presence of HT.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

necrobobsledder posted:

SMT has a lot more to it than just instruction scheduling, but the fundamental reason why Hyperthreading (tm) / SMT only gets you two "logical" cores is that SMT is a form of register file and ALU duty cycle utilization similar to how DDR RAM works. That is, in an SMT processor you are able to load registers and process them on both the high and low sides of a clock signal. Because there's a pretty big flurry of bits flipping (causes more noise than necessary in certain circuits) when you do that on top of cache coherency and branch prediction issues this makes sending out instructions correctly and efficiently pretty difficult.

Hyperthreading 1.0 happened well over a decade ago when Java applications were so king (heck, they still are honestly within the Fortune 500) and its best cases were really contrived but ok enough. Only with the second incarnation of Hyperthreading were CPUs better able to understand scheduling of micro-ops and caching enough to make more improvements in scheduling hardware threads better.

POWER8 supports eight SMT threads per core. Both the Xeon Phi products have done four-way SMT on x86. There is no two-logical-core limit.

Hyperthreading is not related to DDR, or triggering anything on both the rising and falling edge of a clock signal. It's just keeping track of multiple states for a given set of execution units, and switching rapidly between them. This is most helpful when you've got workloads that are hitting the slower caches or main memory frequently - there's enough idle time that you want to find some work to fill it, but not enough that it'd be worth it to hit the OS-level scheduler. You can track as much state as you like, but unless you're running embarrassingly parallel work that blocks a lot, diminishing returns kick in quickly. So, most implementations for consumer, workstation, and ordinary server hardware don't bother with more than two threads per core.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Space Gopher posted:

POWER8 supports eight SMT threads per core. Both the Xeon Phi products have done four-way SMT on x86. There is no two-logical-core limit.
...
Hyperthreading is not related to DDR, or triggering anything on both the rising and falling edge of a clock signal....
I glossed over a lot, admittedly, but the clocking concept came up in school when my digital logic professor, who worked with Susan Eggers at UW on the reference SMT architecture, taught us a lot of the fundamentals of where SMT came from. This was back in 2002 and Intel hadn't invested much in schedulers compared to waging the clock speed war. Regardless of the implementation, Eggers et al.'s primary work of importance was coming up with a microarchitecture for SMT rather than signaling tricks, so that's a gently caress-up on my part anyway and I've just deleted it.

Space Gopher posted:

POWER8 supports eight SMT threads per core. Both the Xeon Phi products have done four-way SMT on x86. There is no two-logical-core limit.
I'm aware of that, but I left out the whole "there's a lot more than just register files and ALUs in a pipeline" part and kinda boneheadedly forgot the whole SMT != Hyperthreading part in the first place. In fact, the threads on Xeon Phi are not supposed to use the HyperThreading brand name, since Phi is in-order-execution only: https://software.intel.com/en-us/forums/intel-many-integrated-core/topic/515522


redeyes
Sep 14, 2002

by Fluffdaddy
Just got an ASRock X99 Fatality Killer system. Bluescreening like crazy. First issue: the BIOS/UEFI battery was totally dead, and the system won't turn on without a good battery. Next, instant bluescreens playing YouTube; that turned out to be the Killer gaming NIC, so I just turned off that stupid thing and use the built-in Intel NIC. Last, random bluescreens pointing to the Windows kernel, which usually means memory. System uses G.Skill DDR4 2666 stuff. I upped the voltage to 1.25V from 1.20V and that seems to have stabilized the thing. G.Skill poo poo sucks and I don't trust it.
Think I should go ahead and replace the memory?
