Chuu
Sep 11, 2004

Grimey Drawer
Currently considering a hardware upgrade, since my 8-year-old Opteron is starting to show its age. I'm considering an i7-980X for one good reason (I could really use the two extra cores for the simulations I run) and one bad one (I'm tired of my work PC being faster than my home PC, and the 980X would solve that).

If I want the extra cores, is there any reason to wait for Sandy Bridge? For cost comparisons, is it safe to say that the i7-2600's performance is going to be comparable to the i7-875's?

Chuu
Sep 11, 2004

Grimey Drawer

Excursus posted:

It's being launched January 5th, so I guess that's when the official reviews from Anand etc. will come out.

I know this is the wrong thread, but when is Bulldozer's launch? And does anyone know what AMD is going to leak around the 5th to try to keep people waiting?

I've been sitting on cash for a new computer for almost 3 months now thanks to Sandy Bridge.

Chuu
Sep 11, 2004

Grimey Drawer
I'd be a lot more tempted to shell out the extra $300-$400 for a -E system if Intel's enthusiast/mainstream tick-tock didn't mean that in 6-9 months I'd probably regret it after seeing the first Ivy Bridge benchmarks.

I wish they'd update the platforms at the same time.

Chuu
Sep 11, 2004

Grimey Drawer

Agreed posted:

Not just profit, though. Thermal/power concerns, production consistency... They could need a stepping in order to get well-functioning, reliable 8-core parts produced on their current lithography. Don't go right for the "gently caress the consumer!" angle; it's possible that they just couldn't put out 8-core parts reliably, or within the thermal envelope and power budget, right now.

Another reason: too much cannibalization of low-end workstation (i.e. Xeon) sales.

Chuu
Sep 11, 2004

Grimey Drawer
I know the timeline for IB-E is probably a year out, but is there any news at all on whether IB-E is going to be socket-compatible with SB-E?

Chuu
Sep 11, 2004

Grimey Drawer

nuvan posted:

I would assume, though, that if I'm looking to upgrade, am currently on an E8400, and don't care about overclocking, waiting for Ivy Bridge would be the way to go?

There's a serious argument to be made for jumping on an i5-2500K or an i7-2600K, because there are some ridiculous deals to clear out inventory right now. You'd be hard-pressed to tell the difference between a SB and an IB CPU once it's in your system.

Chuu
Sep 11, 2004

Grimey Drawer

hobbesmaster posted:

Those are the applications that make the world go round, though. Intel probably figures Sandy Bridge EN/EP will be fine until Haswell, so there won't be anything to downgrade to the regular -E part. Or something.

When you start pricing out systems, there's so much overlap between comparable Xeons and the -E parts that I assume the -E parts only exist to give the ultra-terrible Xeon bins some use.

Chuu
Sep 11, 2004

Grimey Drawer

movax posted:

Yeah, that ribbon cable :stare: I almost want to buy one of those and slap a probe on it to see how lovely it is. I mean granted it's only six critical wires (TXP/TXN, RXP/RXN, REFCLK+/-), but I imagine the impedance discontinuities/etc are not helping things. The eye probably looks really gross.

I'd be curious, since I just bought one of these to move an x1 eSATA card to a physical slot below my motherboard (mATX in an ATX case).

The x16 version and x1->x16 versions of these were being used very heavily by the bitcoin mining community to max out the number of GPUs they could put in a single computer. In fact, when trying to find any stats at all about the reliability of these things, I couldn't find a single reference to their use outside of bitcoin mining.

Chuu
Sep 11, 2004

Grimey Drawer

Install Gentoo posted:

Uh, is there actually any evidence that ARM servers were any kind of popular already, or going to be anytime soon? It's a bit premature to call Intel late.

ARM's stock took a small hit last week when the CEO of EMC commented that he saw no future for ARM in the server market.

Chuu
Sep 11, 2004

Grimey Drawer
I agree that this is probably not as big a deal as most people are making it out to be, but it feels like the endcap on the desktop era.

I remember when I was TA'ing a computer science course and was blown away that none of the freshmen knew what the internet was like pre-Google. In another couple of years, I'd bet we'll see the first crop of students who have never owned a desktop.

Chuu
Sep 11, 2004

Grimey Drawer

Factory Factory posted:

Yeah, chipset and iGPU were a big deal, power-wise, in the C2D days. The original Atoms had ~2.5W TDPs for the CPUs, but the chipset (Intel 945GC, latest revision of the C2D era chipset) was a good 22.2W. A 17W Ivy Bridge ULV is matched with a 4.1W or 3.7W HM77 PCH, for a bit less power overall.

Plus I think Intel is starting to put 10W and 13W IVB ULVs to market, anyway. Configurable TDP was always planned to be a feature for IVB, the idea being you could have different levels of performance for Turbo Boost based on battery/plug, workload, user preference, etc. Right now, the rumor mill is suggesting the following:



Speaking of low-power, Intel just beat ARM to market on many-tiny-cores microserver parts with (presumably an all-new) Atom, codename Centerton. It's a Saltwell-core Atom with ECC, VT-x, 8 PCIe lanes, and support for up to 8 GB of ECC RAM per dual-core SoC. The idea behind microservers is that thread-heavy, compute-light applications show much better performance per watt on a ton of wimpy cores than on fewer beefy cores - stuff like content delivery, simple search, and other stuff I'm not gonna pretend to understand.

Minor roadmap update, too: Xeon E3 v3 (Haswell) is being promised in 2013 (which isn't a huge shock, but nice to know), and Atom is going to be synced on process starting then, too, first with the 22nm Avoton, and then 2014 bringing 14nm Broadwell and "next-gen" Atom in the same year. Avoton will probably be based on the redesigned Silvermont Atom core, and there's a good chance that it will beat ARMv8 to market.

I was pretty excited when I saw the announcement, since it seemed like a no-brainer to get an ITX version of this out there tailored for FreeNAS, loaded up with ECC memory. Then I saw the 8GB limit. Seriously, Intel?

Chuu
Sep 11, 2004

Grimey Drawer

Bob Morales posted:

And generally that's not an issue, because who's going to have more than 2 high-speed (SSD) devices on one board, right?

It's an issue this gen for people who want to run RAID 1 with SSD caching; you really want everything to be on a SATA III port.

Chuu
Sep 11, 2004

Grimey Drawer
I can't wait to hear the reasoning as to why TSX is disabled. That absolutely does not seem like an enterprise feature to me; it sounds more like a free performance boost, via a recompile, for applications with poorly written threaded code.

Anyone know when information was supposed to start leaking? Judging from this Twitter thread between Ian@Anandtech and FPiednoel@Intel, THG was not authorized to publish that preview.

Chuu fucked around with this message at 04:24 on Mar 19, 2013

Chuu
Sep 11, 2004

Grimey Drawer

movax posted:

e: One big thing I forgot, though, is the upcoming HEVC (successor to H.264/AVC). That, coupled with 4K resolutions, could be rough for older hardware that either doesn't have a GPU or similar hardware for decode assist, or can't be upgraded to have it (i.e. older laptops). Current-gen tablets/portables are of course completely hosed unless their ASICs secretly have the flexibility or IP to handle it.

Just want to add: I don't really care about 4K resolution until I can buy a 4K monitor for my workstation. The way the next-gen consoles are specced, we're probably not going to be gaming at 4K until the next-next generation, i.e. whatever comes after the PS4/XBox 720. Considering Sony and Microsoft are trying to get 10 years out of these consoles, that is not going to be anytime soon (although I wouldn't put money on the PS4/720 actually hitting the 10-year mark).

Chuu
Sep 11, 2004

Grimey Drawer

HalloKitty posted:

This was usually the biggest problem, even some way into the 2000s.

This still happens. Most default configurations of the HP Z620 workstation, which starts around $1,700, come with 4GB of memory. I've seen dual-processor Z620s configured like this, which means they ship 2x2x1GB DIMMs. What's even worse is that they had configurations of the Z600 that didn't pair DIMMs, i.e. you're essentially running your shiny new $expensive workstation in single-channel mode.

I'd like to believe that this is because you'd be insane to pay HP's premium for memory so they ship as little as possible -- but knowing how they work this is definitely not the case.

Chuu
Sep 11, 2004

Grimey Drawer

quote:


McGlockenshire posted:

Not going to happen for a long while yet, don't hold your breath.

They might not be "real" Xeons, but the Haswell E3-1200 v3 series is also supposed to be available on the 4th.

What makes these fake?

If I'm reading that chart correctly, the E3-1275 is the equivalent of the i7-4770K? I've been needing an upgrade for a while, and I'd take TSX over overclocking.

Anyone know if any of the Z87 motherboards support ECC with an E3?

Chuu fucked around with this message at 09:36 on Jun 2, 2013

Chuu
Sep 11, 2004

Grimey Drawer

SpaceBum posted:

From what I see, yes, the E3-1275 v3 would be similar to a 4770K, but the Xeon would have some more features enabled, like VT-d for VM hosts and ECC RAM support, in case anyone was wondering. I'm not even sure most productivity programs make use of TSX, or will in the next few years.

I'm a developer who mainly works on highly concurrent apps, so I really want to play around with TSX. On the other hand, unless work foots the bill for a dedicated system, I don't really want to give up SLI for gaming at home. If I could get ECC for just the price premium of the memory I'd be on board, but it looks like none of those Supermicro boards would work.

Chuu fucked around with this message at 11:31 on Jun 2, 2013

Chuu
Sep 11, 2004

Grimey Drawer
In the Anandtech ASUS video, the rep says that the Z87-WS motherboard supports ECC memory [with an E3 Xeon]. Link here. I can't find any reference to this on ASUS' website, and from what I know about the Z87, this should be impossible.

Did he misspeak, or will this board really take ECC with an E3 Xeon?

Chuu fucked around with this message at 07:18 on Jun 5, 2013

Chuu
Sep 11, 2004

Grimey Drawer

Factory Factory posted:

The memory controller is on the CPU. The motherboard only provides traces from the CPU to the DIMM slots. I don't know if that's conclusive, but it makes it seem plausible.

Just got a response from ASUS:

ASUS Support posted:

Dear [Chuu],

After checking, if you can install the ECC memory on this board will depend on the cpu you use.
If you use Server CPU, the board can support both ECC or non-ECC UDIMM.
If you use the Desktop CPU, the board only can support non-ECC UDIMM.


Best Regards,

[Rep's Name]
ASUS Product Support Team
[Phone Number]

That is awesome news.

Chuu
Sep 11, 2004

Grimey Drawer

incoherent posted:

Someone, somewhere must have committed to an entire warehouse full of them to justify Intel supporting ECC on the i3.

I would suspect it's commercial NAS vendors. i3+ECC is a really sweet spot for both commercial NAS vendors and the FreeNAS folk.

Chuu
Sep 11, 2004

Grimey Drawer
To me, the fact that we can build machines with billions of parts at the nanometer scale that work on timescales in the nanosecond range, last a decade or more, and can be purchased for less than a day's salary makes this all seem like magic.

Chuu
Sep 11, 2004

Grimey Drawer

KennyG posted:

Let me restate: how big a deal are the new transactional and other instructions for desktop use, virtualization sandboxes, and video encoding?

With regard to TSX: well-written threaded code will not benefit at all, while poorly written threaded code can theoretically be improved dramatically with a recompile (or, for interpreted languages, an upgrade to the runtime). I'd love to see some hard numbers here too.

There is a *lot* more code in the latter category than the former.
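
To make that concrete, here's a minimal sketch of the pattern I mean: RTM-based lock elision around one big, lazy lock. This is my own illustration, not anything from Intel's docs -- the names (bump, counters, big_lock) are made up, and you'd need gcc -mrtm and a TSX-enabled chip:

code:

/* Hypothetical sketch of TSX (RTM) lock elision -- my illustration.
 * Build with: gcc -mrtm -pthread sketch.c */
#include <immintrin.h>
#include <stdatomic.h>

static atomic_int big_lock = 0;      /* one coarse lock guarding it all */
static long counters[1024];          /* the "poorly threaded" shared data */

static void lock_slow(void) {
    while (atomic_exchange(&big_lock, 1))
        while (atomic_load(&big_lock)) _mm_pause();
}
static void unlock_slow(void) { atomic_store(&big_lock, 0); }

void bump(int i) {
    unsigned status = _xbegin();     /* start a hardware transaction */
    if (status == _XBEGIN_STARTED) {
        /* Reading the lock puts it in our read-set: if anyone takes it
         * for real, we abort instead of racing with them. */
        if (atomic_load(&big_lock)) _xabort(0xff);
        counters[i]++;               /* no lock actually acquired */
        _xend();                     /* commit */
    } else {
        /* Aborted (conflict, capacity, no TSX): fall back to the real
         * lock, exactly like the pre-TSX binary behaves. */
        lock_slow();
        counters[i]++;
        unlock_slow();
    }
}

Two threads calling bump() on different indices commit in parallel even though the source still says "one big lock," which is exactly why coarsely locked code is the big winner and already fine-grained code gains nothing.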

Chuu
Sep 11, 2004

Grimey Drawer
I think you might be slightly missing the point. We used to get such great overclocking chips because the microarchitectures were engineered with lots of headroom, and one of the tradeoffs was increased power consumption. Unless Intel starts creating separate architectures for the desktop/server market and the mobile market, they simply are not willing to make that tradeoff anymore. It doesn't really matter that enthusiast users would rather keep things the way they were; their voice just isn't strong compared to mobile and server needs.

We're not yet in a world where "stock-voltage overclocking is done," but in many ways it's a failure on Intel's part if we don't get there.

(By the way, I personally love the Anandtech podcast; that episode was just not one of the better examples. Try the one from the week before, where you have an editor who is willing to engage with Anand rather than constantly backing off.)

Chuu fucked around with this message at 06:30 on Jul 20, 2013

Chuu
Sep 11, 2004

Grimey Drawer
Does anyone know when Ivy Bridge-EX is going to start hitting the market? There are a couple of third-party sites with details, but I can't find any information on Intel's own site. 24 DIMMs per socket is so sexy for per-core-licensed database servers, if these CPUs aren't ridiculously expensive.

Chuu
Sep 11, 2004

Grimey Drawer
I know I'm a little late to the de-lidding conversation, but I can't get over how absurd it is that the first step for some people, after buying one of the most technologically advanced components in the world, is to go at it with a vise, hammer, and two-by-four. It's such a ridiculous juxtaposition.

De-lidding has never tempted me, since I don't overclock anymore -- but if it's really that easy, it's awfully tempting to try.

Chuu
Sep 11, 2004

Grimey Drawer

Ignoarints posted:

I think whatever physical space it uses would get saturated by heat pretty much instantly. Now, if it were being used it'd just create more heat, and if the die were actually smaller I'm sure heat would be more concentrated per *crazy unit of measurement*, but I don't even really mean that. I wondered if leaving it out would leave "room" in the design to make the processor gerbils run better. But I kind of doubt it's something as simple as that.

It's significant enough that Xeons enable higher turbo frequencies when cores are disabled. I don't know if consumer chips do the same.

Chuu
Sep 11, 2004

Grimey Drawer

Ignoarints posted:

Hate to bring this back up. Although I'm still not convinced something so small could be considered a heat sink next to something so hot for more than, say, a microsecond, I am positive that an inactive portion of the CPU that simply isn't producing heat would make the whole chip cooler than if it were active.

I was trying to find the processor manual where they explicitly say that the reason they allow higher max turbo frequencies on Xeons with cores disabled* is thermal, but Intel's site is a mess and I don't really feel like downloading 200-page PDFs on my connection. A disabled graphics core would work on exactly the same principle.

(*Disabled as in: if you have an 8-core Xeon processor, almost all server boards will let you run it in 1/2/4/8-core mode, mainly because of per-core licensing restrictions on some enterprise software.)

Chuu
Sep 11, 2004

Grimey Drawer

HalloKitty posted:

In 2001. I think that's where they'll stay, sadly.

My school had a computer science lab full of these, attached to high-end Sun and Windows boxes. All people ever used them for was coding or multi-tabling online poker, since we didn't really have anyone in the CS program doing graphics research. What a colossal waste.

Chuu
Sep 11, 2004

Grimey Drawer

canyoneer posted:

IIRC, they don't allow OEMs, builders and retailers to advertise machines as "Haswell" machines.

A quick search for "Haswell" on Newegg shows this not to be true.

Chuu
Sep 11, 2004

Grimey Drawer
edit: Editing this post out because it's uncomfortably close to revealing info I probably shouldn't; but the time page faults take is hugely relevant in HPC these days.
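
In generic, non-secret terms, this is the kind of thing I mean -- the standard Linux trick for getting page faults out of the hot path entirely. The buffer size and structure here are arbitrary, purely for illustration:

code:

/* Pre-fault and pin memory so the hot loop never takes a page fault.
 * Linux-specific sketch; 256MB is an arbitrary illustration size. */
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define BUF_SIZE (256UL * 1024 * 1024)

int main(void) {
    char *buf = malloc(BUF_SIZE);
    if (!buf) return 1;

    /* Touch every page now so the kernel backs it with real memory
     * up front instead of on first access mid-computation. */
    memset(buf, 0, BUF_SIZE);

    /* Pin current and future pages -- no faults or swap-out later.
     * Usually needs root or a raised RLIMIT_MEMLOCK. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return 1;

    /* ... latency-critical work on buf goes here ... */

    free(buf);
    return 0;
}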

Chuu fucked around with this message at 04:55 on May 5, 2014

Chuu
Sep 11, 2004

Grimey Drawer

Welmu posted:

Intel has finalized some Devil's Canyon SKU details:



4 GHz base frequency :stare:

Are we getting a Xeon refresh around the same time? 4GHz is the max turbo frequency of the best Haswell Xeon out there (E3-1285 v3: 3.6GHz base/4.0GHz turbo, $662 each).

Chuu
Sep 11, 2004

Grimey Drawer

Rime posted:

^ An i7-920, for example, remains a very powerful piece of hardware for modern gaming. It usually benchmarks not significantly lower than even the 4790K, and is held back only by the increasingly archaic feature set of X58 and a comparatively silly power draw.

The only reason to upgrade before Skylake, if you are coming from socket 1366 or 2011, is either changing your form factor or because your hardware straight up died. If I weren't downsizing to mITX, I'd run my 920 until the capacitors blew off. Intel has completely and utterly poo poo the bed on delivering a worthwhile upgrade path now that AMD is no longer a threat.

I don't know if I agree with this; from what we know about Haswell-E, it's going to be a very sweet platform, especially if you could use the extra cores or your current board doesn't support NVMe. I'd be shocked if anything outperforms it for at least a year after its release. I'd definitely be keeping a close eye on it if I were still running an i7-920.

I might be a little biased though since I do some scientific computing on my home box. Haswell-E and the DC P3500 can't get here soon enough.

Chuu fucked around with this message at 03:42 on Jun 17, 2014

Chuu
Sep 11, 2004

Grimey Drawer

Longinus00 posted:

There should be no need for this if you're just plugging directly into a PCIe slot right?

From what I understand (and I hope someone corrects me if I'm wrong), your BIOS needs to support NVMe if you want to boot off of it, but if you're just using it as a data drive it doesn't matter.

Chuu
Sep 11, 2004

Grimey Drawer
Tangentially related, while we're talking about ASUS and audio:

I don't know how much exposure people here have to the audiophile world, but about three years ago ASUS made huge waves when they released the Xonar Essence One, a DAC + headphone amp for ~$500 that outperformed some $2500+ DACs and many sub-$1000 amps. The audiophile community being what it is (i.e. half of it is full of pseudoscience bullshit), people refused to accept that an outsider could really put together such a good piece of equipment -- despite the fact that designing a good DAC is probably child's play compared to designing a good motherboard, and despite the tons of great reference designs for the analogue stage that are publicly available.

You can tell from the feature set that whoever designed it had their ear close to the ground, because it has separate volume controls for line-out (perfect for driving bookshelf speakers) and headphones. This is an absolute godsend for an audio setup you want to use at your PC, and for some reason pretty much no other DAC+amp combo has it.

Chuu
Sep 11, 2004

Grimey Drawer

Rime posted:

I've never really understood: is there something that a DAC will give me, compared to plugging my headphones into the '80s-era JVC Digital Synthesizer Receiver & SEA Equalizer that I have my PC plugged into for sound output? Honest question.

To make a long story short, somewhere in your audio chain there is a DAC, because somewhere in your audio chain digital bits are being converted into an analogue waveform. Odds are though, the DAC is garbage.

Once you have the analogue signal from the output of the DAC, it's at "line level" and needs to be amplified to get it to the volume you want. Odds are, the amp connected to the DAC is garbage.

You don't need to spend much money to get something much better than the default. That pic of the GENE isn't detailed enough to make out the chips, but it's probably significantly better than the default, since ASUS has shown in the past that they actually care about audio quality.

EDIT: A lot of older amps make great headphone amps because the headphone-out and the speakers were driven from the same circuit. For whatever reason this started to change, and now they're usually on separate circuits, especially with digital amps. An amp can't improve the quality of the signal coming in, though.

Chuu fucked around with this message at 04:34 on Jun 19, 2014

Chuu
Sep 11, 2004

Grimey Drawer

Shaocaholica posted:

So....buttcoins?

Trading firms are going to love these, depending on the price point and the HDL toolchain.

Chuu
Sep 11, 2004

Grimey Drawer
If I'm dropping $1000 on a CPU, $300 on a motherboard, and probably close to $500 on memory, I obviously don't care too much about my budget. Why not go all the way, then: wait a little longer and get a Haswell-E based Xeon?

A $1000 CPU that doesn't support ECC or the VM instructions, purely because of artificial restrictions, seems a little crazy. Still, I'm really interested in the platform.

Chuu fucked around with this message at 07:01 on Jul 5, 2014

Chuu
Sep 11, 2004

Grimey Drawer
That deal is nuts. I'm going to stop at Microcenter tomorrow to pick one up. I don't even know what I'm going to use it for.

Chuu
Sep 11, 2004

Grimey Drawer
According to the Anandtech article, quoting an ASUS rep regarding overclockability:

quote:

i7-5960X at 4.4 GHz with 1.300 volts is below average
i7-5960X at 4.5 GHz with 1.300 volts is average
i7-5960X at 4.6 GHz with 1.300 volts is above average

If I'm reading the charts on this page correctly, at 1.3V under load the power draw is somewhere around 350W.

Am I misreading something? 350W seems like an insane amount of heat to deal with. Can you do it under 25dB?
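
Doing the back-of-envelope myself: dynamic power scales roughly as C*V^2*f. Taking the 5960X's rated 140W TDP at its 3.0GHz base clock, and guessing ~1.05V for stock voltage (that last number is my assumption, not a published spec):

code:

/* Back-of-envelope sanity check on that 350W figure using the usual
 * dynamic-power scaling P ~ C * V^2 * f. Stock vcore (~1.05V) is my
 * guess; TDP and base clock are the published i7-5960X numbers. */
#include <stdio.h>

int main(void) {
    double p_stock = 140.0;   /* W, i7-5960X rated TDP */
    double v_stock = 1.05;    /* V, assumed stock vcore */
    double f_stock = 3.0;     /* GHz, base clock */
    double v_oc    = 1.30;    /* V, the "average" OC voltage above */
    double f_oc    = 4.5;     /* GHz, the "average" OC clock */

    double scale = (v_oc / v_stock) * (v_oc / v_stock) * (f_oc / f_stock);
    printf("estimated OC power: %.0f W\n", p_stock * scale);
    /* prints ~322 W */
    return 0;
}

That's ~320W from scaling alone, before leakage, so the charts are probably right -- which just makes the cooling question worse.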

Chuu
Sep 11, 2004

Grimey Drawer

GokieKS posted:

380W would be asinine for stock, but for an overvolted OC it's not that unusual.

I haven't overclocked in years; I didn't realize those power draws were typical these days. It definitely puts the 220W AMD processors in a new light for me as well.
