Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Even 2000 MB/s SSDs don't stand up to DRAM. Dual-channel DDR3-1600 is worth 25,600 MB/s peak. Corsair did some synthetic benchmarking of DDR3 vs. DDR4 speeds, and long story short, quad-channel DDR4 systems are playing with around 60,000 MB/s of bandwidth, albeit with worse latency than dual-channel controllers.
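If you want to sanity-check those peak figures, here's a minimal back-of-the-envelope sketch; the transfer rates and channel counts are just illustrative picks, and real sustained bandwidth comes in well under the theoretical peak:

```python
# Theoretical peak DDR bandwidth = transfers/sec * bytes per transfer * channels.
# Speeds below are illustrative; sustained bandwidth is always lower than peak.

def peak_bandwidth_mb_s(transfer_rate_mt_s, channels, bus_width_bits=64):
    """Peak bandwidth in MB/s for a DDR configuration."""
    bytes_per_transfer = bus_width_bits // 8  # one 64-bit channel moves 8 bytes per transfer
    return transfer_rate_mt_s * bytes_per_transfer * channels

print(peak_bandwidth_mb_s(1600, channels=2))  # DDR3-1600, dual channel -> 25600 MB/s
print(peak_bandwidth_mb_s(1866, channels=4))  # DDR4-1866, quad channel -> 59712 MB/s (~60 GB/s class)
```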

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Well, the original question was about whether 400 MB/s is a fair bandwidth figure for RAM, and the point is that it's easily two orders of magnitude off when even an SSD pushes an order of magnitude higher. Even lower-end SSDs do 400 MB/s; we'd be in trouble if our RAM were that slow.

Welmu
Oct 9, 2007
Metri. Piiri. Sekunti.
A general manager from Intel said that 10nm chips will launch in early 2017.

Intel then retracted his statement "for competitive reasons".

Rastor
Jun 2, 2001

Lord Windy posted:

I can't wait until we have some new storage that is both RAM and Harddisk. Maybe Flash Memory will one day get fast enough. What does 400mb/s translate to in RAM land? Although 160ms latency is essentially forever in computers.

Instant Grat posted:

Google "Memristor".

A) 400MB/s? 160ms latency? Google NVMe drives, such as the Samsung XS1715. 3000MB/s read / 1400MB/s write and 0.2ms latency.

B) http://www.hpl.hp.com/research/systems-research/themachine/
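To put those latency numbers in perspective, here's a rough sketch of how many CPU cycles each one costs; the 3 GHz clock and the ~60 ns DRAM figure are assumptions picked just for illustration:

```python
# Latency expressed in CPU clock cycles, assuming a 3 GHz core.
# The DRAM figure (~60 ns) is an assumed ballpark, not a measured spec.
clock_hz = 3e9
latencies_s = {
    "160 ms (the latency quoted above)": 160e-3,
    "0.2 ms (NVMe SSD read)": 0.2e-3,
    "60 ns (DRAM access)": 60e-9,
}
for name, seconds in latencies_s.items():
    print(f"{name}: ~{seconds * clock_hz:,.0f} cycles")
# 160 ms is ~480,000,000 idle cycles; NVMe is ~600,000; DRAM is a couple hundred.
```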


Edit: I see I was rather beaten by necrobobsledder:

necrobobsledder posted:

The bigger problem I see is that the software we write at present is incapable of handling super-high speeds without rewrites and a complete rethink of networking. Here's a good example of what is required to handle the network hardware coming down the pipe at 100 gigabits - it's NOT easy, and ironically enough it's somewhat gated by how fast your CPU can work: https://lwn.net/Articles/629155/

The penalty for getting something wrong when sending data has historically been really severe for network applications and any form of high-performance computing. So improving the memory hierarchy's latency, as mentioned above, is likely to provide a lot more throughput than simply doubling the theoretical bandwidth. Sure, bandwidth helps for peak performance, but that's an idealized view of the world. This is exactly how Intel has done so well in the past 10 years: clock speed doesn't matter; smarter cache, smarter branch prediction, more efficient TLBs, etc. have been far more helpful than blindly scaling down transistors and putting small nuclear reactors into our homes (which wouldn't work well anyway due to current leakage). The unanswered question is whether we'll hit a wall even on how smart we can be within this general programming paradigm. Even multi-core / parallel programming won't save us at some point if what we're doing requires serial processing, as is typical in most games, because most game programmers aren't going to use threads everywhere, just from the handling overhead alone and from guaranteeing some form of hard real-time behavior, which is what people demand from their games (although nobody does hard real-time in practice, I'd say, because nobody's going to die if you lose a couple ms worth of frames during a CS:GO match).

400Mb/s or 400MB/s? Either way, those are slow numbers for RAM even back in 1999. Flash is substantially faster, for starters. Samsung's newest SSD coming out will do 2000+ MB/s sustained.
Well, in a lot of ways this is already done on most OSes under the covers of a programmer's APIs, because a great many calls get turned into mmap on Linux, for example, and that basically memory-maps disk onto a memory range for you (among other neat options). So it's up to the OS to do this part. This is just one of the realities of legacy programs and bringing them into the current realities of virtual memory systems.
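As a toy illustration of that memory-mapping idea (the file name here is made up, and this is just the user-level view, not what any particular OS does under the hood):

```python
import mmap

# Map a file into the process's address space and read it as if it were memory.
# "data.bin" is a placeholder name for illustration.
with open("data.bin", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:16]            # slicing the mapping reads bytes on demand
        print(header)
        print(mm.find(b"\x00"))     # pages are faulted in from disk only when touched
```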

Rastor fucked around with this message at 18:10 on Feb 5, 2015

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Darkpriest667 posted:

Right, but Samsung is already having some controller issues (or at least I think they're NAND controller issues) that severely degrade the speed of accessing older memory blocks. They said they fixed it, but it's rearing its ugly head again. Basically what we need is large RAMdisks. We need to stop treating storage and RAM as separate things and combine them into one. That's what I'm saying about how programs access RAM: the main slowdown is that they're loaded into RAM from storage instead of living IN RAM. If DDR weren't so goddamned expensive now because of a B.S. shortage that wasn't even real, I'd have bought more this past year. A RAMdisk is really nice for loading stuff, but unless DDR4 comes down quite a ways it's really stupid to upgrade unless you need X99 for video editing and computational stuff. I do both, so it's doubly annoying for me. I never thought I would AGAIN live to see RAM more expensive than my CPU, like in the mid-1990s, but somehow it will be!



Basically this needs to be the way forward, yes.

yeah the only significant speed upgrade is going to be when some sort of competitive NVRAM hits the market.

however:

STTMRAM/PCM/FeRAM have density issues

ReRAM (memristors, crossbar) aren't in production so they are theoretical at best

mmkay
Oct 21, 2010

Lord Windy posted:

I can't wait until we have some new storage that is both RAM and Harddisk. Maybe Flash Memory will one day get fast enough. What does 400mb/s translate to in RAM land? Although 160ms latency is essentially forever in computers.

This might be a start to do some reading, maybe?

canyoneer
Sep 13, 2005


I only have canyoneyes for you

Doesn't the joke go something like "high performance computing is the science of turning a CPU-bound problem into an input-bound problem"?

Darkpriest667
Feb 2, 2015

I'm sorry I impugned
your cocksmanship.

Malcolm XML posted:

yeah the only significant speed upgrade is going to be when some sort of competitive NVRAM hits the market.

however:

STTMRAM/PCM/FeRAM have density issues

ReRAM (memristors, crossbar) aren't in production so they are theoretical at best

ReRAM is theoretical, but it really is our only hope to solve the issue. I have no idea why we even went to the DDR4 standard, considering that, except in a very few benchmarks, RAM speed is not very helpful. Latency is much more important, and even then it's really so fast at this point that it's absurd.

HERAK
Dec 1, 2004

Darkpriest667 posted:

ReRAM is theoretical, but it really is our only hope to solve the issue. I have no idea why we even went to the DDR4 standard, considering that, except in a very few benchmarks, RAM speed is not very helpful. Latency is much more important, and even then it's really so fast at this point that it's absurd.

DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users.

Darkpriest667
Feb 2, 2015

I'm sorry I impugned
your cocksmanship.

HERAK posted:

DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users.

It's a 0.3 V difference. It's less than 5 watts for 4 DIMMs. That's not enough of an efficiency gain for the entire consumer market to be switched to a new standard. Hell, I fold 24/7 and even for me it's not enough of an efficiency gain to switch standards. For servers it is a big deal, because we're talking about datacenters that have 10,000 DIMMs in them.
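As a rough sanity check on that wattage claim, here's a minimal sketch; the ~4 W per DDR3 DIMM is an assumed ballpark, and the quadratic voltage scaling only covers the dynamic part of the power:

```python
# Rough DIMM power saving estimate, assuming dynamic power scales with V^2
# and an assumed ~4 W active draw per DDR3 DIMM (ballpark, not measured).
ddr3_v, ddr4_v = 1.5, 1.2                       # nominal voltages: the ~0.3 V difference
ddr3_dimm_w = 4.0                               # assumed active power per DDR3 DIMM
dimms = 4

ddr4_dimm_w = ddr3_dimm_w * (ddr4_v / ddr3_v) ** 2
saved = (ddr3_dimm_w - ddr4_dimm_w) * dimms
print(f"~{saved:.1f} W saved across {dimms} DIMMs")  # ~5-6 W, roughly the figure above
```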

r0ck0
Sep 12, 2004
r0ck0s p0zt m0d3rn lyf

Darkpriest667 posted:

It's a 0.3 V difference. It's less than 5 watts for 4 DIMMs. That's not enough of an efficiency gain for the entire consumer market to be switched to a new standard. Hell, I fold 24/7 and even for me it's not enough of an efficiency gain to switch standards. For servers it is a big deal, because we're talking about datacenters that have 10,000 DIMMs in them.



HERAK posted:

DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users.

No one cares about your folding, darkprincess.

Darkpriest667
Feb 2, 2015

I'm sorry I impugned
your cocksmanship.

r0ck0 posted:

No one cares about your folding, darkprincess.

Where is my tiara? The point is that the efficiency gain, even for power users like people who do compute stuff, isn't enough to justify switching from DDR3 to DDR4. If I could get an X99 platform with DDR3, I'd already be on one.

r0ck0
Sep 12, 2004
r0ck0s p0zt m0d3rn lyf

Darkpriest667 posted:

Where is my tiara? The point is that the efficiency gain, even for power users like people who do compute stuff, isn't enough to justify switching from DDR3 to DDR4. If I could get an X99 platform with DDR3, I'd already be on one.

What is your point, would you just state it clearly once and for all?

Josh Lyman
May 24, 2009


Darkpriest667 posted:

It's a 0.3 V difference. It's less than 5 watts for 4 DIMMs. That's not enough of an efficiency gain for the entire consumer market to be switched to a new standard. Hell, I fold 24/7 and even for me it's not enough of an efficiency gain to switch standards. For servers it is a big deal, because we're talking about datacenters that have 10,000 DIMMs in them.
Is it conceivable that we'd see DDR4 primarily in servers and DDR3 for consumers? My guess is the memory manufacturers would prefer to manufacture only one or the other, but that doesn't mean they can't manufacture both.

From what I can tell, there won't be a DDR5, which means you'll be able to use your DDR4 from Skylake for years after as the industry tries to figure out a successor.

Darkpriest667
Feb 2, 2015

I'm sorry I impugned
your cocksmanship.

Josh Lyman posted:

Is it conceivable that we'd see DDR4 primarily in servers and DDR3 for consumers? My guess is the memory manufacturers would prefer to manufacture only one or the other, but that doesn't mean they can't manufacture both.

From what I can tell, there won't be a DDR5, which means you'll be able to use your DDR4 from Skylake for years after as the industry tries to figure out a successor.

No, AMD is still on DDR3, and Intel won't move its consumers who aren't in the HEDT segment to DDR4 until Skylake (which is what they've sworn for a while now). There were some rumors of UniDIMM, but that has mostly died off. Most of the memory makers are not producing nearly what they were 2 years ago; they are trying to clear inventories. I imagine they are scaling down DDR3 production and ramping up for DDR4 production, but not nearly on the scale at which they produced DDR3, mostly because the majority of consumer computer users are now on iPads or other tablet devices. Desktops are mostly for gamers and high-end users nowadays. This means we will likely not see the low pricing on parts that we saw during the golden age of desktops, 2006-2012. That day is over. Desktop PCs are now more of a niche than the standard.

Mr Chips
Jun 27, 2007
Whose arse do I have to blow smoke up to get rid of this baby?

Darkpriest667 posted:

It's a 0.3 V difference. It's less than 5 watts for 4 DIMMs. That's not enough of an efficiency gain for the entire consumer market to be switched to a new standard. Hell, I fold 24/7 and even for me it's not enough of an efficiency gain to switch standards. For servers it is a big deal, because we're talking about datacenters that have 10,000 DIMMs in them.

A couple of watts in a very small form factor system where the target TDP* is ~20 watts is a good gain.

*for the sake of argument

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
One point of note is that Intel's CEO has said that for every mobile phone sold, there are at least 4(?) server CPUs sold to support the services used by that phone. So there is a good chance of market bifurcation. Both markets do demand lower power consumption, and mobile device purchases may slow down, with server purchases slowing as well from sheer oversupply. Novel server boxes like the Dell VRTX or all those random micro servers like Project Moonshot might help with more turnover in datacenters. I'm so waiting to get some of the VRTX boxes decommissioned for a home datacenter instead of the boxes I have around.

canyoneer posted:

Doesn't the joke go something like "high performance computing is the science of turning a CPU-bound problem into an input-bound problem"?
Not sure who said it, but yeah, something close to "Turning compute-bound problems into I/O bound problems."

EoRaptor
Sep 13, 2003

by Fluffdaddy

HERAK posted:

DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users.

The DDR4 spec for voltages is pretty dated; remember that the spec was finalized in 2011, and we've pushed very hard on performance per watt since then.

Better is the DDR4L (LPDDR4) spec that exists for laptops; it pushes voltage down to 1.05 V without sacrificing performance. Desktops could probably switch to SODIMM formats and adopt this spec without any end-user impact, but I don't know whether that is being seriously considered. You could make a traditional LPDDR4 DIMM, but I don't think anyone has actually bothered to.

Expect DDR4 to stick around for a long time, though. No one has proposed a spec that solves any of DDR4's problems in a way that is affordable for consumers.

JawnV6
Jul 4, 2004

So hot ...

necrobobsledder posted:

One point of note is that Intel's CEO has said that for every mobile phone sold, there are at least 4(?) server CPUs sold to support the services used by that phone.

If it's the quote I'm thinking of, it was Otellini and it was for every 100 cell phones, 6 server chips were sold. But I can't dig it up because every search I try is talking about what a huge failure he was to not see into the iPhone future from 2005.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I was pretty sure I got that quote wrong, hence the question.

To be even more brutal to the guy, I was writing mobile apps for Windows CE and StrongARM crap back in school in the 2002-2004 timeframe, and even then everyone was talking about how huge smartphones were going to be. I mean, I knew they had neat capabilities, but everything we were trying to do with geolocation and triangulation was terrible then and the APIs just weren't there to make it easy to see how much of an impact they'd have.

I like to think I wrote, in class, an early version of a Google Maps-style predictive map tile loader based on scroll momentum, but all the map software on phones at the time was like MapQuest (the incumbent): click to re-center, repaint, repeat. I just went with an approach that was more intuitive for users. It was really easy on .NET Compact (but ugh, I couldn't tell what was actually supported there vs. desktop until I compiled the sucker).

So yeah, both Microsoft and Intel lost hard when they had the technology about ready to do something meaningful with smartphones. But oh no, everyone was still riding that laptop money gravy train and enterprise software was selling like hotcakes still.

Darkpriest667
Feb 2, 2015

I'm sorry I impugned
your cocksmanship.

Mr Chips posted:

A couple of watts in a very small form factor system where the target TPD* is ~20 watts is a good gain.

*for the sake of argument


That's absolutely true; however, where efficiency matters most is the mobile sector. Intel has been screwing around with efficiency for about 4 generations now and the rest of the industry has basically followed, except AMD, who apparently have engineers and marketing sitting around in a room snorting lines of blow and then deciding maybe they should talk smack about Intel and Nvidia and release a product.


The reason we're seeing so much focus on efficiency is that mobile is where the growth is and will continue to be for the future. People are spending 300 to 1000 dollars every few years on a new phone and tablet. A good number of desktops are from the era before Intel's Core and AMD's Phenom processors. It's good for the majority of the market, and I guess in a way it's good for those of us who tinker with poo poo. If a product has the same heat and power threshold but becomes more efficient, we can push it harder and farther than we pushed things before. That being said, it hasn't panned out in the CPU area. Haswell and Ivy are both poor clockers in terms of raw clock speed but have made good ground in IPC. A 4.5 GHz Haswell is equivalent to a 5.0 GHz Sandy, and most Haswells can do 4.3 (not at default voltage, of course).
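To make that clock-for-clock claim concrete, a minimal sketch using just the poster's numbers (and ignoring memory, workload, and turbo effects):

```python
# If performance ~ IPC * clock, then "4.5 GHz Haswell == 5.0 GHz Sandy Bridge"
# implies an IPC ratio of 5.0 / 4.5. Figures are the poster's, not benchmarks.
haswell_clock_ghz = 4.5
sandy_clock_ghz = 5.0

ipc_ratio = sandy_clock_ghz / haswell_clock_ghz
print(f"Implied Haswell IPC advantage: ~{(ipc_ratio - 1) * 100:.0f}%")  # ~11%
```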

calusari
Apr 18, 2013

It's mechanical. Seems to come at regular intervals.


Latest rumor, unlocked Skylake in Q3 after all. Hope it's true.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
is there any hope of broadwell-ep by q3 this year?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

StabbinHobo posted:

is there any hope of broadwell-ep by q3 this year?

I don't think an EP has shipped before an E in recent memory.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

PCjr sidecar posted:

I don't think an EP has shipped before an E in recent memory.

It would be very odd if one did. They may use the same die, but for the EP, Intel has to validate the multi-socket support and possibly do an extra spin or two to fix bugs.

Darkpriest667
Feb 2, 2015

I'm sorry I impugned
your cocksmanship.

calusari posted:



Latest rumor, unlocked Skylake in Q3 after all. Hope it's true.



Well, we know it's a rumor because there is no way Intel is going to cannibalize its own market of high-end gamers by releasing Skylake and Broadwell LGA sockets at the same time.

calusari
Apr 18, 2013

It's mechanical. Seems to come at regular intervals.
There was never a confirmation that there will be any Broadwell desktop parts, so there may not be any cannibalization:

The 65 W unlocked Broadwell chip could be for AIOs

The 95 W unlocked Skylake is the desktop part (Devil's Canyon successor)


obviously this is 100% speculation

Mr.PayDay
Jan 2, 2004
life is short - play hard

Malcolm XML posted:

there will be no noticeable difference between ddr3 and ddr4 for the next few years

most of the difference is only noticeable at datacenter scale until ddr4 clock speeds surpass ddr3

In other words, my investment in an i7-5930K (overclocked to 4.3 GHz) with 16 GB of DDR4-2666 RAM and the X99 platform, to be "safe" for 4-5 gaming years (and just planning to upgrade the GPUs every 18-24 months), was... dumb? :gonk: :saddowns:

eggyolk
Nov 8, 2007


Mr.PayDay posted:

In other words, my investment in an i7-5930K (overclocked to 4.3 GHz) with 16 GB of DDR4-2666 RAM and the X99 platform, to be "safe" for 4-5 gaming years (and just planning to upgrade the GPUs every 18-24 months), was... dumb? :gonk: :saddowns:

No one is safe when it comes to future-proofing shiny computer bits. No one.

Lord Windy
Mar 26, 2010
Does anyone have an article that compares HD 5500, HD 6000, and Iris 6100 performance? Lenovo is selling the Broadwell CPUs in their new 450s and 550s and I'd like to know how they perform.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Mr.PayDay posted:

In other words, my investment in an i7-5930K (overclocked to 4.3 GHz) with 16 GB of DDR4-2666 RAM and the X99 platform, to be "safe" for 4-5 gaming years (and just planning to upgrade the GPUs every 18-24 months), was... dumb? :gonk: :saddowns:

It was dumb because you should never, ever think of a high end gaming system as an "investment" on any level. Instead, you should think of it as lighting money on fire.

You can almost always get 90% of the performance for way less than 90% of the money. From the manufacturer's point of view the reason for the premium line is to extract fat profits from people with deep pockets who have to have the fastest thing, not to give you a bargain deal on something that'll last forever. This is especially true in this case: for 99% of gamers, regular Haswell is a much better choice than Haswell-E. You can get a 4.0 GHz Haswell for $350 or less and it will be every bit as good as that 5930K for essentially all games for the foreseeable future. (There aren't many games that need more than four Haswell cores, and there are not likely to be any in the next 5 years.)

Mr.PayDay
Jan 2, 2004
life is short - play hard

BobHoward posted:

It was dumb because you should never, ever think of a high end gaming system as an "investment" on any level. Instead, you should think of it as lighting money on fire.

You can almost always get 90% of the performance for way less than 90% of the money. From the manufacturer's point of view the reason for the premium line is to extract fat profits from people with deep pockets who have to have the fastest thing, not to give you a bargain deal on something that'll last forever. This is especially true in this case: for 99% of gamers, regular Haswell is a much better choice than Haswell-E. You can get a 4.0 GHz Haswell for $350 or less and it will be every bit as good as that 5930K for essentially all games for the foreseeable future. (There aren't many games that need more than four Haswell cores, and there are not likely to be any in the next 5 years.)

Thanks for the reply, lesson learned I guess. My local PC dealer even "warned" me, but I think I just wanted to own a native 6-core CPU with nice OC results and be ready for up to a 3- or 4-way SLI system in 2016 or 2017, so I won't have to buy anything new.
I hope at least World of Warcraft will benefit. I get up to 190 frames on Ultra settings with CMAA, as WoW is still heavily CPU-dependent.
Yeah, just trying to find reasons here :manning:

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Mr.PayDay posted:

Thanks for the reply, lesson learned I guess. My local PC dealer even "warned" me, but I think I just wanted to own a native 6-core CPU with nice OC results and be ready for up to a 3- or 4-way SLI system in 2016 or 2017, so I won't have to buy anything new.
I hope at least World of Warcraft will benefit. I get up to 190 frames on Ultra settings with CMAA, as WoW is still heavily CPU-dependent.
Yeah, just trying to find reasons here :manning:

A 4790K would be faster at stock, because WoW and almost every other CPU-heavy game doesn't scale with many cores. It just wants the fastest cores possible, and seeing as you're on the same architecture, a 4790K out of the box would be faster for that particular scenario; it turbos to 4.4 anyway. On the other hand, now that you've overclocked, you at least have parity. Parity with a much, much cheaper board, CPU, and RAM.

The extra cores are explicitly only for those people for whom time is money. Rendering, video encoding and the like.

You're right that you have a lot of PCI Express lanes, but 3 or 4 way SLI scales very poorly in almost every situation, and thus would never actually be worth it anyway.

I guess you're ready for adding a lot of fast PCIe SSDs, though.

HalloKitty fucked around with this message at 09:52 on Feb 10, 2015

ElehemEare
May 20, 2001
I am an omnipotent penguin.

BobHoward posted:

You can almost always get 90% of the performance for way less than 90% of the money. From the manufacturer's point of view the reason for the premium line is to extract fat profits from people with deep pockets who have to have the fastest thing, not to give you a bargain deal on something that'll last forever. This is especially true in this case: for 99% of gamers, regular Haswell is a much better choice than Haswell-E. You can get a 4.0 GHz Haswell for $350 or less and it will be every bit as good as that 5930K for essentially all games for the foreseeable future.
This being said, the last few generations of uArch changes have mainly given us greater efficiency. I'm running an i5-750 that still mostly chugs along well. Do we have any rational expectation that the Skylake uArch changes will have tangible benefits (for the mainstream gamer) that will outweigh the necessity of UniDIMM DDR3/DDR4 upgrades, in addition to mobo/CPU, for single GPU setups?

Rime
Nov 2, 2011

by Games Forum

eggyolk posted:

No one is safe when it comes to future-proofing shiny computer bits. No one.

Unless you bought an i7 920.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

ElehemEare posted:

This being said, the last few generations of uArch changes have mainly given us greater efficiency. I'm running an i5-750 that still mostly chugs along well. Do we have any rational expectation that the Skylake uArch changes will have tangible benefits (for the mainstream gamer) that will outweigh the necessity of UniDIMM DDR3/DDR4 upgrades, in addition to mobo/CPU, for single GPU setups?

Keeping in mind that everything about Skylake is basically rumors right now:

  • DirectX 12 can allow heavy increases in CPU use to allow fuller rendering efficiency, such that there is a marked difference between dual-core Haswell and quad-core at the extremes of the API's capability. This suggests that the clock-and-uarch differences between Haswell and your i5-750 (even if overclocked) can make a difference at the worst-case extremes in terms of real frames per second. On a pre-release API with pre-release drivers on a pre-release OS with basically one synthetic benchmark and no actual games yet.
  • Finally getting TSX instructions to work should help the performance of such highly-threaded applications even when they are not especially optimized with fine-precision locking.
  • Moving to 20 PCIe lanes means you can do 2-way SLI while still using a fancy PCIe SSD without jumping to an Extreme board/CPU/RAM. Which doesn't count for single-GPU gamers, but I guess it means you can still have x8 for your GPU even if you load up RAID cards and PCIe SSDs and crazy poo poo.

There are some other rumored uarch differences, but none that look relevant for gaming as long as you aren't rendering on the CPU.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Rime posted:

Unless you bought an i7 920.

Radeon 7970 was actually an incredible buy too, seeing as it's STILL being sold now, and is totally viable with today's games. Up against the later but relevant rival the 680, it still had a 50% VRAM advantage, and when the fight came again in the form of 770 vs 280X, things still held up.

From the CPU end, although Nehalem had a lot of charm (I remember building a friend's machine with a 980X; if that were mine, I'd still rock it today, a 920, not so much), I think it's overshadowed by the Q6600, which had frankly ridiculous overclock potential and staying power. I have a feeling the 2500K will have some outrageous term of relevance now, though.

I guess if you want to throw any kind of bone to AMD CPUs, it's that their old K8 architecture is still toe to toe with their newest stuff, keeping them comparably relevant. A Phenom II X6 is not really much less desirable than any Piledriver.

HalloKitty fucked around with this message at 18:14 on Feb 10, 2015

ElehemEare
May 20, 2001
I am an omnipotent penguin.

So based on current rumors, unless I plan on stepping up the multithread dependency of things I run on my home desktop (which is a strong maybe with SQL/Hadoop stuff if I decide I want to bring work home with me even more), decide I need multiple GPUs (I don't, I'm running a 1440x900 single monitor off a 970 :derp:), or plan on putting tonnes of drives into a new setup (I don't, my old rig becomes a NAS for that), Skylake doesn't necessarily afford me any huge improvements over Haswell or Broadwell, or necessarily even my current Lynnfield chip (aside from efficiency, but I'm not running a data centre out of my apartment)? Seems like I can wait for Skylake and hop on the clearance LGA1150 bandwagon, perhaps. Thanks for the input.

ElehemEare fucked around with this message at 20:16 on Feb 10, 2015

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

ElehemEare posted:

So based on current rumors, unless I plan on stepping up the multithread dependency of things I run on my home desktop (which is a strong maybe with SQL/Hadoop stuff if I decide I want to bring work home with me even more), decide I need multiple GPUs (I don't, I'm running a 1440x900 single monitor off a 970 :derp:), or plan on putting tonnes of drives into a new setup (I don't, my old rig becomes a NAS for that), Skylake doesn't necessarily afford me any huge improvements over Haswell or Broadwell, or necessarily even my current Lynnfield chip (aside from efficiency, but I'm not running a data centre out of my apartment)? Seems like I can wait for Skylake and hop on the clearance LGA1150 bandwagon, perhaps. Thanks for the input.
I literally work at home running all of the above things and I have no real need to upgrade my E3-1230 (i7-2600k equivalent - somewhat faster actually for work). That E3-1230 line has held to performance numbers roughly all within 20%-ish from Sandy Bridge until now with most of the efforts going towards reducing power consumption. While Skylake is one of the more ambitious architectural changes (moreso than Haswell was to Sandy Bridge) I still don't think another 10% more performance would be that big of a deal either.

MrYenko
Jun 18, 2012

#2 isn't ALWAYS bad...

necrobobsledder posted:

I literally work at home running all of the above things and I have no real need to upgrade my E3-1230 (i7-2600k equivalent - somewhat faster actually for work). That E3-1230 line has held to performance numbers roughly all within 20%-ish from Sandy Bridge until now with most of the efforts going towards reducing power consumption. While Skylake is one of the more ambitious architectural changes (moreso than Haswell was to Sandy Bridge) I still don't think another 10% more performance would be that big of a deal either.

For me, moving from Nehalem to Skylake isn't even about performance; it's just to get out of my ancient X58 chipset motherboard, and even that isn't because of speed concerns but because the thing is flat-out old. Dead USB ports, it hasn't had a functioning onboard network interface in years, and I really feel like it's the weak point of my machine currently.

Also, I like new stuff. :v:
