The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Cardboard Box A posted:

But if you ported Crysis to a quantum computer, would you finally get decent performance?

I could run Crysis perfectly at max in early 2008, and it only cost me 8 grand.


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

The Lord Bude posted:

I could run Crysis perfectly at max in early 2008, and it only cost me 8 grand.

Was that the quad-9800 setup? Still, as silly as it seems in retrospect, it didn't put anyone on the street. And if it actually does a thing, it's not any more a waste than any other luxury purchase, really. Surprised that SLI worked that well, if I'm remembering correctly; I thought it was still pretty crap back then from my experiences, but I had way more time with CrossFire, and we know for a damned fact that it was horrible, haha. Still played BioShock like crazy, but not so much other demanding titles...

I wonder how long it'll be until an integrated GPU from Intel outperforms the last hardware setup that really knocked me out visually. Hopefully not too long.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Agreed posted:

Was that the quad-9800 setup? Still, as silly as it seems in retrospect, it didn't put anyone on the street. And if it actually does a thing, it's not any more a waste than any other luxury purchase, really. Surprised that SLI worked that well if I am remembering correctly, I thought it was still pretty crap back then from my experiences but I had way more time with Crossfire and we know for a damned fact that it was horrible, haha. Still played Bioshock like crazy but not so much other demanding titles...

I wonder how long it'll be until an integrated GPU from Intel outperforms the last hardware setup that really knocked me out visually. Hopefully not too long.

Yup. 9800GX2s. Both of them are still going strong in different friends' PCs. When the motherboard died, the rest of the PC got divvied up between about 4 people. The only SLI issue I ever had was having to disable Quad SLI to play WoW, but my fps never dropped into double digits playing that game anyhow.

DaNzA
Sep 11, 2001

:D
Grimey Drawer
The 9800GX2 was also from before GPUs could clock down when running a dual-monitor setup with two different desktop resolutions. So in the end the GX2 just never clocked down and roasted itself to death :v:

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Cardboard Box A posted:

But if you ported Crysis to a quantum computer, would you finally get decent performance?
Unless the Cry engine has a lot of FFTs, DCTs, prime factorization, graph searching, and massive list sorting, there's not much benefit to porting. A quantum computer would likely be implemented as a sort of coprocessor alongside a classical computing setup first, like how FPUs were done a couple of decades ago.

Assepoester
Jul 18, 2004
Probation
Can't post for 10 years!
Melman v2

The Lord Bude posted:

I could run Crysis perfectly at max in early 2008, and it only cost me 8 grand.
At 16x AA?





necrobobsledder posted:

Unless the Cry engine has a lot of FFTs, DCTs, prime number factorization, graph searching, and sorting massive lists there's not much that would be of benefit to porting. A quantum computer would likely be implemented on a classical computing setup first as a sort of coprocessor like how FPUs were done a couple decades ago.
So it wouldn't help at all huh :(

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Only 4x... but that's always been enough.

k-uno
Jun 20, 2004

KillHour posted:

A normal computer stores data in memory as a state (A bit is either '0' or '1', in binary systems). A quantum computer takes advantage of the fact that you can have a particle be in a superposition of multiple states (A qubit [short for quantum bit] can be both '0' and '1' at the same time.). For most math we do on a computer, this doesn't mean jack-squat. However, there are certain problems that are hard on classical computers, but aren't hard on a quantum computer ('Hard' has a specific meaning in computer science). It's these problems that quantum computers help solve; they're not better than classical computers in any generalized sense.

Edit: Watch this. https://www.youtube.com/watch?v=g_IaVepNDT4

A quantum computer is capable of solving certain classes of problems vastly more quickly than any possible classical computer, precisely because of the superposition principle. The speedup from a quantum algorithm ranges from polynomial (searching an unordered list of N elements for a specific value takes ~N queries on a classical computer, versus ~N^(1/2) on a quantum computer) to sub-exponential (factoring large numbers is polynomial on a quantum computer, sub-exponential with the best known classical algorithm). The hand-wavy reason for this is that the superposition principle allows you to do calculations with an insane degree of parallelism. Imagine that you have one bit, and you want to operate with some function on both 0 and 1. In a classical computer, this means you have to run the function twice, but in a quantum computer, you can set a qubit (quantum bit) to be both 0 and 1 at the same time, and operate on both values in a single function call. For one bit, this is a factor-of-two increase, but for, say, 32 qubits, you can arrange a superposition of all 2^32 ~ 4 billion possible configurations simultaneously, and operate on all of them in one step. Now, the catch is that the result you get when you measure the state at the end of the algorithm is random, but if you choose your algorithm cleverly, the "wrong" configurations can interfere destructively, making the random result heavily biased toward the problem solution.

I don't think we'll see a real, commercial quantum computer for at least 5 years, though (D-Wave's machine is more like an FPGA built out of superconductors, and whether there's any quantum speedup or not isn't really clear), because these superpositions are very delicate: if the state of a qubit is measured at any time (e.g. through a random interaction with the environment), then it's frozen into one of the values and the superposition is lost. This can be compensated for with quantum error correction protocols, but they require thousands of redundant qubits to be scalable, and most current experiments are on 5 qubits or fewer.
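The interference trick in that last paragraph can actually be sketched in a few lines of classical code. Below is a toy NumPy simulation of Grover's search (the ~N^(1/2) lookup mentioned above) on 5 simulated qubits; the target index is made up for illustration, and of course a classical simulation stores all 2^n amplitudes explicitly, which is exactly what a real quantum computer avoids:

```python
import numpy as np

n = 5                     # simulated qubits
N = 2 ** n                # size of the search space
target = 19               # hypothetical "marked" item we're searching for

# Start in an equal superposition of all N basis states.
state = np.full(N, 1 / np.sqrt(N))

# ~sqrt(N) Grover iterations: phase-flip the target, then reflect every
# amplitude about the mean ("inversion about the mean").
iterations = int(round(np.pi / 4 * np.sqrt(N)))   # 4 iterations for N = 32
for _ in range(iterations):
    state[target] *= -1                # oracle: mark the answer with a phase flip
    state = 2 * state.mean() - state   # diffusion: wrong answers interfere away

# "Measuring" now would return `target` with probability amplitude^2.
print(f"P(measure target) after {iterations} iterations: {state[target]**2:.4f}")
```

After those 4 iterations the target's probability is above 99%, versus the 1/32 you'd get from random guessing; that bias toward the solution is the destructive interference doing its job.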

Deuce
Jun 18, 2004
Mile High Club

k-uno posted:

A quantum computer is capable of solving certain classes of problems vastly more quickly than any possible classical computer, precisely because of the superposition principle. The speedup from a quantum algorithm ranges from polynomial (searching a non-ordered list of N elements for a specific value takes ~N queries on a classical computer, versus ~N^(1/2) on a quantum computer) to sub exponential (factoring large numbers is polynomial on a quantum computer, sub exponential with the best known classical algorithm). The hand-wavy reason for this is that the superposition principle allows you to do calculations with an insane degree of parallelism. Imagine that you have one bit, and you want to operate with some function on both 0 and 1. In a classical computer, this means you have to run the function twice, but in a quantum computer, you can set a qubit (quantum bit) to be both 0 and 1 at the same time, and operate on both values in a single function call. For one bit, this is a factor of two increase, but for, say, 32 qubits, you can arrange a superposition of all 2^32 ~ 4 billion possible configurations simultaneously, and operate on all of them in one step. Now, the catch is that the result you get when you measure the state at the end of the algorithm is random, but if you choose your algorithm cleverly, the "wrong" configurations can interfere destructively making the random result heavily biased toward the problem solution.

I don't think we'll see a real, commercial quantum computer for at least 5 years though (d-wave's machine is more like an FPGA built out of superconductors and whether there's any quantum speedup or not isn't really clear), because the catch is that these superpositions are very delicate, and if the state of a qubit is measured at any time (e.g. through a random interaction with the environment), then it's frozen into one of the values and the superposition is lost. This can be compensated with quantum error correction protocols but they require thousands of redundant qubits to be scalable, and most current experiments are on 5 qubits or less.

So, magic. Got it.

Diviance
Feb 11, 2004

Television rules the nation.

Deuce posted:

So, magic. Got it.

That's the way I read it.

Police Automaton
Mar 17, 2009
"You are standing in a thread. Someone has made an insightful post."
LOOK AT insightful post
"It's a pretty good post."
HATE post
"I don't understand"
SHIT ON post
"You shit on the post. Why."
The real magic is how you can make a game that looks like Crysis totally boring.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD

Police Automaton posted:

The real magic is how you can make a game that looks like Crysis totally boring.

It's not like you can write an original story for an FPS now.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Police Automaton posted:

The real magic is how you can make a game that looks like Crysis totally boring.

Personally, the Crysis series games are my favourite FPSs (after Borderlands 2, but I'm not sure if that game technically counts).

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

The Lord Bude posted:

Personally, the Crysis series games are my favourite FPSs (after Borderlands 2, but I'm not sure if that game technically counts).

Same here, actually. As mentioned, stories in FPS games are usually pretty balls anyway, but they nailed the gameplay in Crysis and Crysis 3. Crysis 2 was kinda meh, but it helped me get cool with their simplification of the armor mechanics. And Borderlands 2 is a game that's perma on my SSD.

Pretty sure if I'd had more money back when I was especially silly about this poo poo I'd have ended up with a pair of 9800s too. Well, hell, I did end up with a pair of 4850s, haha. My flirtation with AMD cards after a long time using nVidia; I'd probably have been thrilled with one of 'em, but two was a disaster because crossfire sucked so bad at the time.

FFactory, you mentioned that enabling the integrated GPU for Haswell only carries like a 10-15W penalty? Any idea how Intel is doing in terms of perf/watt? Trying to bone up on my IGPU knowledge since I kinda feel like we'll be looking more at nVidia vs. Intel in the coming efficiency wars, and even if the perf isn't quite there yet, that is a TINY amount of power to be able to run games as well as they do (considering). It only seems like it's gonna come out of nowhere when they become genuinely competitive because it's so unusual to associate "Intel" with "Actually quite good graphics, really." They've been making amazing improvements, and they really, really nailed 14nm better than anyone else. Their fins are rectangular! Did you SEE how tiny those interconnects are getting? :3:

computer parts
Nov 18, 2010

PLEASE CLAP

Police Automaton posted:

The real magic is how you can make a game that looks like Crysis totally boring.

Real life is pretty boring, and shooting Chinese/North Korean soldiers is also pretty boring.

Henrik Zetterberg
Dec 7, 2007

k-uno posted:

A quantum computer is capable of solving certain classes of problems vastly more quickly than any possible classical computer, precisely because of the superposition principle. The speedup from a quantum algorithm ranges from polynomial (searching a non-ordered list of N elements for a specific value takes ~N queries on a classical computer, versus ~N^(1/2) on a quantum computer) to sub exponential (factoring large numbers is polynomial on a quantum computer, sub exponential with the best known classical algorithm). The hand-wavy reason for this is that the superposition principle allows you to do calculations with an insane degree of parallelism. Imagine that you have one bit, and you want to operate with some function on both 0 and 1. In a classical computer, this means you have to run the function twice, but in a quantum computer, you can set a qubit (quantum bit) to be both 0 and 1 at the same time, and operate on both values in a single function call. For one bit, this is a factor of two increase, but for, say, 32 qubits, you can arrange a superposition of all 2^32 ~ 4 billion possible configurations simultaneously, and operate on all of them in one step. Now, the catch is that the result you get when you measure the state at the end of the algorithm is random, but if you choose your algorithm cleverly, the "wrong" configurations can interfere destructively making the random result heavily biased toward the problem solution.

I don't think we'll see a real, commercial quantum computer for at least 5 years though (d-wave's machine is more like an FPGA built out of superconductors and whether there's any quantum speedup or not isn't really clear), because the catch is that these superpositions are very delicate, and if the state of a qubit is measured at any time (e.g. through a random interaction with the environment), then it's frozen into one of the values and the superposition is lost. This can be compensated with quantum error correction protocols but they require thousands of redundant qubits to be scalable, and most current experiments are on 5 qubits or less.

Man, I work for Intel and I have no clue what you nerds are talking about.

Rime
Nov 2, 2011

by Games Forum
This explains why Intel is failing to deliver innovation. :smuggo:

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Agreed posted:

FFactory, you mentioned that enabling the integrated GPU for Haswell only carries like a 10-15W penalty? Any idea how Intel is doing in terms of perf/watt? Trying to bone up on my IGPU knowledge since I kinda feel like we'll be looking more at nVidia vs. Intel in the coming efficiency wars, and even if the perf isn't quite there yet, that is a TINY amount of power to be able to run games as well as they do (considering). It only seems like it's gonna come out of nowhere when they become genuinely competitive because it's so unusual to associate "Intel" with "Actually quite good graphics, really." They've been making amazing improvements, and they really, really nailed 14nm better than anyone else. Their fins are rectangular! Did you SEE how tiny those interconnects are getting? :3:

I could back-of-the-napkin about it...

Haswell GT3 is about 174mm2. That review tests it at both 47W (full chip) and 55W (ditto). Sometimes the extra 8W gives more performance, i.e. when the CPU is being stressed, and sometimes it doesn't. Specific benchmarks are hard to generalize because Gen 7 HD Graphics is architecturally more distinct than GCN and Fermi et al. are from each other. In particular, HD Graphics is extremely shader-heavy, giving results like this:

[benchmark chart]

But also like this:

[benchmark chart]

So there's not going to be one performance-per-watt number. There's also more that makes the comparison apples-to-oranges: that 10-15W is just the GPU (and, for the record, it's an old figure from Sandy Bridge). It doesn't include RAM or power delivery overhead, which can take up to 50% of the board power on a modern card.

So comparing that 10-15W to a generally-similar-performing GeForce GT 640 (GDDR5) at 49W max isn't the whole story. On one hand, if you included the motherboard VRMs and system's DDR3 SDRAM necessary to enable the GPU, you'd likely come up with something a lot closer to 49W than 10W. But on the other hand, the GT 640 also needs the motherboard VRMs and system RAM to produce useful work.

So I'd have to estimate anywhere from "in line" to "great, including structural advantages."

Certainly doesn't have a die size advantage, though - the GeForce GT 640's GK107 is about 118 mm2, about 2/3 the size.

Factory Factory fucked around with this message at 07:03 on Sep 28, 2014
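For anyone who wants to see the napkin itself, here's that accounting as a quick Python sketch. The benchmark score is an arbitrary stand-in (assume the two parts perform roughly similarly, per the GT 640 comparison), and the 50% overhead share is the upper bound from the post, not a measurement:

```python
# Back-of-the-napkin perf/watt using the figures quoted above.
score = 100.0                 # arbitrary benchmark score; assume rough parity

igpu_gpu_only_w = 12.5        # midpoint of the 10-15W GPU-only figure
overhead_share = 0.5          # RAM + power delivery: up to 50% of board power
# Gross the GPU-only number up to an equivalent "board" power.
igpu_board_w = igpu_gpu_only_w / (1 - overhead_share)

gt640_board_w = 49.0          # GeForce GT 640 (GDDR5) max board power

print(f"iGPU:   {score / igpu_board_w:.2f} score/W at ~{igpu_board_w:.0f}W")
print(f"GT 640: {score / gt640_board_w:.2f} score/W at {gt640_board_w:.0f}W")
```

This particular gross-up lands at 25W; push the overhead assumptions harder and you get closer to the 49W end, which is why the honest answer stays in the "in line" to "great" range rather than a single number.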

hreple
Feb 11, 2006
hardly
Not to derail on the theory talk, but here goes.

I'm trying to set up a plan for upgrading my PC. I can get a decent amount of money for my old system (i5-2500, GTX 780, and 8 gigs of RAM).

I see the 5820K / 5930K benchmarking very well for gaming (which is what I primarily do) - but in the graphics card thread, people kept claiming I'm just as well off with an i7-4790K Devil's Canyon. I've noticed in HardOCP's latest benchmarks that the 5820/30 seems to outperform the i7 here and there, but not by that much.

So - new X99 or "last-gen" i7-4790K for gaming? I'm planning on SLI GTX 980s and 16 gigs of RAM regardless of system. Also - a good motherboard for the Devil's Canyon is greatly appreciated, as I'm having issues figuring out what I want / need. Also - a suggestion on memory is appreciated. Been too long since I built something.

I really want something that's future proof and gives me insane fps at 2560x1440, as that's the native res of my 27" G-Sync monitor.

LiquidRain
May 21, 2007

Watch the madness!

The 5900 series is a waste of money for a gaming machine, plain and simple, no ifs, ands, or buts. You pay double or more for the same performance. I'd say Devil's Canyon, an ASUS Z97-A, Corsair RAM, and a single GTX 980 until you feel that a single isn't enough. Really the only place where a single GTX 980 wouldn't be enough out of the box is 4K, and you've already told us that's not what you have. Considering you have G-Sync as well, you won't be left in stutter/v-sync/tear hell when the card can't deliver top FPS either.

btw, in gaming, the i5-4690K will match the i7 the majority of the time too. There is simply no reason to go balls to the wall on a PC for gaming anywhere except the graphics card right now, and that's only true if you're pushing 1440p or 4K. If you have a 2500K, unless you are getting a price that lets you pretty much in-place upgrade your base system for real cheap, I'd say just overclock and stay on it until we see what Skylake brings. It is honestly just not worth it.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

hreple posted:

Not to derail on the theory talk, but here goes.

I'm trying to set up a plan for upgrading my PC. I can get a decent amount of money for my old system (i5 2500. 780 gtx and 8 gigs of ram).

I see the 5820K / 5930K benchmarking very well for gaming (which is what I primarily do) - but in the graphics card thread, people kept claiming I'm just as well off with an i7-4790K Devil's Canyon. I've noticed in HardOCP's latest benchmarks that the 5820/30 seems to outperform the i7 here and there, but not by that much.

So - nex X99 or "last-gen" i7-4790K for gaming. I'm planning on SLI 980 gtx and 16 gigs of ram regardless of system. Also - a good motherboard for the devils canyon is greatly appreciated - as I'm having issues figuring out what I want / need. Also - suggestion on memory is appreciated. Been too long since I built something.

I really want something that's future proof and gives me insane fps in 2500 x 1400 res as that's the native res of my 27" gsync-monitor.

It isn't a question of next gen or last gen. Both are the same generation, just two different platforms for two different audiences. X99 is for workstations that will benefit from the extra cores and such.

Even the move from an i5-4690K to an i7-4790K won't produce a meaningful performance improvement as far as gaming goes, as long as you overclock to compensate for the base clock speed difference between the two - once you overclock, the i5 and i7 should both hit the same speed. Hell, even the $70 overclockable Pentium can come close to the i5 as far as gaming goes.

hreple
Feb 11, 2006
hardly
Thank you guys, I really appreciate the input.

Seeing as I'm some sort of audio-lover, would a motherboard like the ASUS MAXIMUS VII HERO, Socket-1150 make any sense? I do have an external DAC, and I'm using an optical cable at the moment from the PC to the DAC - but would a better sound card make any difference?

Besides that, I think I'm going to go for an i5 or i7, and at least one 980. I just like having the possibility to crank everything to max at native res, and I sort of fear one GTX 980 not being able to at that high a res. My 780 struggles pretty hard, but that's with an old CPU as well.

Anyways, really appreciate this. Also - for clocking, what MHz am I looking at for RAM? Thinking 16 gigs, and I must be able to clock the everlasting hell out of it to be able to feel my epeen maintain its length.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

hreple posted:

Thank you guys, I really appreciate the input.

Seeing as I'm some sort of audio-lover, would a motherboard like the ASUS MAXIMUS VII HERO, Socket-1150 make any sense? I do have an external DAC, and I'm using an optical cable at the moment from the PC to the DAC - but would a better sound card make any difference?

Besides that, I think I'm going to go for an i5 or i7 - and at least one 980. I just like having the possibility to crank everything to max at native res, and I sort of fear one 980 gtx not being able to at that high of a res. My 780 struggles pretty hard, but that's an old cpu as well.

Anyways, really appreciate this. Also -for clocking, what mhz am I looking at for ram? Thinking 16 gigs and I must be able to clock the everlasting hell out of it to be able to feel my epeen maintain it's length.

If you're using an external DAC over optical, then it shouldn't matter what the onboard audio is. Any further questions should be directed to the PC Part Picking thread, though.

fookolt
Mar 13, 2012

Where there is power
There is resistance

Police Automaton posted:

The real magic is how you can make a game that looks like Crysis totally boring.

If you pretend you're a Predator/Yautja/whatever then it's a pretty sweet game actually!

atomicthumbs
Dec 26, 2010


We're in the business of extending man's senses.
How is the 5820K in single-thread performance compared to a 4790K?

GokieKS
Dec 15, 2012

Mostly Harmless.

atomicthumbs posted:

How is the 5820K in single-thread performance compared to a 4790K?

Basically the same at equivalent clock speeds, since it's the same architecture. The 4790K will probably reach higher clock speeds, though that's a generalization.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

atomicthumbs posted:

How is the 5820K in single-thread performance compared to a 4790K?

Slower, because the 5820K is clocked lower both stock and overclocked.
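A rough way to quantify "slower": since both chips are Haswell cores, single-thread performance scales approximately with clock speed (a first-order estimate that ignores cache and memory differences). Using stock max single-core turbo clocks:

```python
# First-order single-thread comparison: same architecture, so scale by clock.
# Stock max single-core turbo clocks in GHz; overclocking shifts both numbers.
clocks = {"i7-4790K": 4.4, "i7-5820K": 3.6}

ratio = clocks["i7-5820K"] / clocks["i7-4790K"]
print(f"5820K at roughly {ratio:.0%} of the 4790K's single-thread performance")
```

Stock vs. stock that's about an 18% gap, and overclocked the 4790K typically sustains higher clocks too, so the gap tends to persist.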

Knifegrab
Jul 30, 2014

Gadzooks! I'm terrified of this little child who is going to stab me with a knife. I must wrest the knife away from his control and therefore gain the upperhand.
So I am currently waiting on the new architecture from Intel to come out, which I hear is sometime next year. Is it a good idea to grab the new chips right away, or is it better to wait for their second or third iteration?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Are Xeon chips always cheaper than the i7 consumer variants, or does that happen after a while on the market? The 1231V3 seems to be the equivalent to the i7-4770 (4C8T Haswell at 3.4/3.8GHz), but costs less in my favorite computer shop. Sure, it doesn't have integrated graphics, but I have yet to have a reason to use it. Initially I figured it could be used alternatively for physics acceleration, but that hasn't happened, so gently caress it.

KillHour
Oct 28, 2007


It's pretty common for the comparable Xeon to be cheaper. i7s are, and always have been, for the more-money-than-sense crowd. There is actually a Xeon version with the graphics (E3-1245 v3), and it's still cheaper than the i7.

http://ark.intel.com/products/75462/Intel-Xeon-Processor-E3-1245-v3-8M-Cache-3_40-GHz

The biggest difference is that the Xeon can't be overclocked, while you can get the K version of the i7 if you're willing to spend even more. The real reason to get the integrated graphics is Quick Sync. If you don't need/want that, then :shrug:.

KillHour fucked around with this message at 02:27 on Sep 30, 2014

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
What the Xeon saves in cost compared to a regular i7 is erased fairly quickly by the motherboard requirements to actually take advantage of all the extra features. With that said, there's nothing keeping you from using an E3 Xeon on the same socket board as the equivalent-generation i7 (they're basically the same underlying chips with a few features shaved off). You really start to feel the burn of the Xeon mark-up when you look at the E5 series and the boards that support those chips.

KillHour
Oct 28, 2007


necrobobsledder posted:

What the Xeon has in cost compared to a regular i7 is erased fairly quickly by the motherboard requirements to actually take advantage of all the extra features. With that said, there's nothing keeping you from using an E3 Xeon on the same socket board as the equivalent generation i7 (because they're basically the same underlying chips with a few features shaved off). Now, you start to feel the burn of the Xeon Intel mark-up when you look at the E5 series and the boards that support those chips.

You sure about that?

http://ark.intel.com/products/77780/Intel-Core-i7-4930K-Processor-12M-Cache-up-to-3_90-GHz
vs.
http://ark.intel.com/products/82765/Intel-Xeon-Processor-E5-1650-v3-15M-Cache-3_50-GHz

Another 3MB of cache and ECC memory support for free. Motherboards for them aren't ridiculous, either:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182928

Edit:
I'd like to point out for everybody just how far we've come. 6 years ago, $1800 could get you this:
http://ark.intel.com/products/34692/Intel-Core2-Extreme-Processor-QX9775-12M-Cache-3_20-GHz-1600-MHz-FSB

It can now get you one of these:
http://ark.intel.com/products/81908/Intel-Xeon-Processor-E5-2680-v3-30M-Cache-2_50-GHz

:psypop:

KillHour fucked around with this message at 03:37 on Sep 30, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.
The E5-1650 V3 is Haswell(-EP); the i7-4930K is Ivy Bridge(-E). The proper comparison is the i7-5930K, which is $594 compared to $586 for the same number of cores, the same amount of L3, the same base clock speed, and a 100MHz faster turbo that makes as good as no difference since the i7 can be OCed.

Anyway, the mark-up for Xeons really kicks in when you get into multi-CPU models. There's no direct 2P analog to the 1650 V3, but going from the E5-1680 V3 (8C, 20MB, 3.2/3.8GHz) to the E5-2667 V3 (8C, 20MB, 3.2/3.6GHz) is a jump of nearly a grand ($1080 vs $2057). The 1P models are fairly reasonably priced for the models that have consumer analogs.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

KillHour posted:

Another 3M of cache and ECC memory for free. Motherboards for them aren't ridiculous, either:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182928
You can find an LGA 1150 motherboard for like $70 with deals; I'm not sure I could find an LGA 2011 board that cheap. With a $140-ish spread, that's the difference of another 16GB of RAM, hardly a trivial cost for a personal build.

I was mostly thinking of the i7-4790 vs. E3-1230 v3 vs. E5-1620 v3 rather than the enthusiast i7 lines. The only way to dip below $1000 with the E5 series is with the single-socket ones that overlap a lot with the E3 series, or with substantially lower core count or TDP chips like the E5-2630L v3. There's hardly any difference among those from a price perspective, but once you go up one more model in each line, the prices double for the E5 and beyond.

In short, gently caress Intel's constant market segmentation changes.

pmchem
Jan 22, 2010


Combat Pretzel posted:

Are Xeon chips always cheaper than the i7 consumer variants, or does that happen after a while on the market? The 1231V3 seems to be the equivalent to the i7-4770 (4C8T Haswell at 3.4/3.8GHz), but costs less in my favorite computer shop. Sure, it doesn't have integrated graphics, but I have yet to have a reason to use it. Initially I figured it could be used alternatively for physics acceleration, but that hasn't happened, so gently caress it.

When I built my last machine nearly two years ago, I got an Ivy Bridge E3-1230v2 because I did not intend to overclock and did not need integrated graphics. It was cheaper than the equivalent i5/i7 and works perfectly in the same motherboard I would've bought for the consumer CPU. I've advocated doing this for a while now, but it never seems to find its way into mainstream system builder guides.

KillHour
Oct 28, 2007


necrobobsledder posted:

You can find an LGA 1150 motherboard for like $70 with deals, I'm not sure if I want to find an LGA 2011 board that cheap. It's the difference between another 16GB of RAM with $140-ish spread, hardly trivial of a cost for a personal build.

I was mostly thinking like i7-4790 v. E3-1230 v3 v. E5-1620 v3 than the enthusiast lines of the i7 series. Only way to dip below $1000 with the E5 series is with the single socket ones that overlap a lot with the E3 series or with substantially lower core count or TDP chips like the E5-2630L v3. There's hardly any difference among those from a price perspective, but once you go up one more model in all of them, the prices double for the E5 and beyond.

In short, gently caress Intel's constant market segmentation changes.

Both of those chips use LGA 2011 (well, LGA 2011-3, in the case of the Xeon).

It makes absolutely no sense to compare an E5 to an i7-4790. They're not even based on the same silicon.

phongn
Oct 21, 2006

pmchem posted:

When I built my last machine nearly two years ago I got an ivy E3-1230v2 because I did not intend to overclock and did not need integrated graphics. It was cheaper than the equivalent i5/i7 whatever and works perfectly in the same motherboard I would've bought for the consumer CPU. I've advocated doing this for a while now but it never seems to find its way into mainstream system builder guides.
Costs extra (even if modestly) and won't overclock. Mainstream sites seem to like to be able to do so.

1gnoirents
Jun 28, 2014

hello :)

phongn posted:

Costs extra (even if modestly) and won't overclock. Mainstream sites seem to like to be able to do so.

I believe they usually cost $30-$40 less or so. And while it was technically possible for one to be incompatible with an 87-chipset motherboard, it very often wasn't (and boards usually expressly listed them as compatible too). Really, it wasn't a bad idea unless you wanted some of the server-only CPU features; I can't remember what they were.

Now I just looked, and it looks like it's $252 for a hyperthreaded 1231v3. The $60 difference could be a healthy part of the motherboard budget, but it is now 200MHz slower than the 4790. If you needed 8 threads on a budget, though, I don't see why not.

redeyes
Sep 14, 2002

by Fluffdaddy

k-uno posted:

A quantum computer is capable of solving certain classes of problems vastly more quickly than any possible classical computer, precisely because of the superposition principle. The speedup from a quantum algorithm ranges from polynomial (searching a non-ordered list of N elements for a specific value takes ~N queries on a classical computer, versus ~N^(1/2) on a quantum computer) to sub exponential (factoring large numbers is polynomial on a quantum computer, sub exponential with the best known classical algorithm). The hand-wavy reason for this is that the superposition principle allows you to do calculations with an insane degree of parallelism. Imagine that you have one bit, and you want to operate with some function on both 0 and 1. In a classical computer, this means you have to run the function twice, but in a quantum computer, you can set a qubit (quantum bit) to be both 0 and 1 at the same time, and operate on both values in a single function call. For one bit, this is a factor of two increase, but for, say, 32 qubits, you can arrange a superposition of all 2^32 ~ 4 billion possible configurations simultaneously, and operate on all of them in one step. Now, the catch is that the result you get when you measure the state at the end of the algorithm is random, but if you choose your algorithm cleverly, the "wrong" configurations can interfere destructively making the random result heavily biased toward the problem solution.

I don't think we'll see a real, commercial quantum computer for at least 5 years though (d-wave's machine is more like an FPGA built out of superconductors and whether there's any quantum speedup or not isn't really clear), because the catch is that these superpositions are very delicate, and if the state of a qubit is measured at any time (e.g. through a random interaction with the environment), then it's frozen into one of the values and the superposition is lost. This can be compensated with quantum error correction protocols but they require thousands of redundant qubits to be scalable, and most current experiments are on 5 qubits or less.

Quantum computers in one post. Badass. Thanks!


pmchem
Jan 22, 2010


phongn posted:

Costs extra (even if modestly) and won't overclock. Mainstream sites seem to like to be able to do so.

As per the post after yours, it costs less. I understand some people want to OC, and that's fine (I've done it in the past), but that wasn't something I was looking for in my latest build.
