necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
It'd be even better if you could have it pull down the pages directly from Microsoft or kernel.org. Trusted computing, gently caress yeah.


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I imagine it'd be handy for storage machines with room for lots and lots of SAS expanders... but with SAS you just daisy chain the fuckers to begin with, so I dunno.

Maybe lots of Infiniband cards? The use cases for so many PCI-e x16 slots are basically only for high-end, high-density servers like what you see in blade-based systems.

I know there are people out there looking for 1TB of RAM in a machine; maybe there's a way to get that much RAM via PCI-e interconnects.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

movax posted:

It looks like the Q6600 doesn't have VT-d. I don't feel qualified to advise you on a purchasing decision solely based on this feature (you might have better luck in the Virtualization megathread), but part of me feels that the higher clocks + hyper-threading of SNB would make up for some overhead lost by lacking VT-d.
The hardware virtualization acceleration features provided by VT-d are of great importance when your VMs are heavily loaded or you're latency sensitive (graphics, animation). Disk I/O is not really affected by these instructions unless you're running some high-throughput I/O on that desktop system. This is part of why it made sense for VT-d to be removed from baseline Sandy Bridge processors - who the heck running VMs at home has those requirements? Granted, I don't think the extra silicon is a huge cost to Intel, but it does force all of us power VM users to move up to Xeon-class systems or stick with the i5/i7 series for the next year.

Despite the minuses, Sandy Bridge's architectural advances over the Core 2 series make up for the loss of VT-d pretty handily, but I wouldn't be surprised if i7 systems do significantly better than the 2500K and 2600K chips on certain benchmarks obscure to regular users. Overall I'd say there's nothing to be lost going from a Q6600 to a Sandy Bridge chip, though it's not as clear-cut going from Nehalem to Sandy Bridge (until those Xeons are released, that is).

I'm still probably going to build myself an AMD based compute infrastructure for my home datacenter because I'm a cheapass and Intel's market segmentation wants to run trains on my wallet.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Given how CRCs work, it's perfectly possible to accumulate a couple of bit errors with matching CRCs in a sufficiently large stream of data over time, so you could start getting silent data corruption. This is a dealbreaker for my potential setups at this time, and I've put them off until this gets resolved.
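To put a rough number on that (a sketch with my own assumptions: random corruption slips past a 32-bit CRC with probability about 2^-32; real failure modes aren't uniformly random, so treat this as a floor, not a prediction):

```python
# Back-of-the-envelope: expected number of corrupted frames whose CRC-32
# still matches, assuming a random corruption passes with probability 2**-32.

def expected_silent_corruptions(corrupted_frames: float, crc_bits: int = 32) -> float:
    """Expected corrupted frames that a crc_bits-wide CRC fails to catch."""
    return corrupted_frames / 2 ** crc_bits

# A storage box that sees a billion corrupted frames over its lifetime:
print(expected_silent_corruptions(1e9))  # ~0.23: silent corruption is plausible
```

Not a big number per box, but across a fleet (or a bigger stream) it stops being hypothetical.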

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Anyone have any word on the LRDIMMs we'll be needing for the server / workstation grade version of Sandy Bridge? I'm seeing the 20th of this month listed as the release date for some of the new Xeons and am confused about when anything will actually happen from Intel, especially with the Cougar Point issues. I'm looking to build myself a workstation primarily for work purposes and would appreciate a timeline (and an idea of budget) for how much longer I'll need to keep doing all my work on a maxed-out, upgraded Macbook Pro attached to a NAS.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
General process problems from one business line can ripple into another (e.g. the server group goes "omg, triple check our poo poo, too!"). Maybe they're affected, maybe not.

I'm kind of excited for the SAS controllers getting built into the Xeons. I'll be doing a RAID-0 of SSDs, and not relying so much upon the motherboard or an extra add-on card for RAID will be a nice change of pace. It does mean I'm choosing vendor lock-in, but I don't think I mind so much here given it's a build-once-and-forget type of machine.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

movax posted:

It's not a process problem,
I should have clarified that in this thread of all places as business process, not fabrication or manufacturing process. Not quite sure what it'll take to keep this from happening in the business lines, but with GPUs and memory controllers and god knows what else merging onto dies, who knows what'll happen in the future?

movax posted:

There are SAS controllers built into the new Xeons? :confused: Are you sure?
It's set for Romley, which is supposed to be out this year and has been announced to include SAS controllers. http://www.glgroup.com/News/New-Intel-processor-chips-will-incorporate-SAS-controller-50602.html

I'm already biting my nails at the thought of how much LRDIMMs will cost and sulking back to FBDIMMs for my 12GB/16GB setup (depending upon the feature and cost matrix at release). I don't need the LRDIMM features for a stupid workstation, but on the other hand I do need to have the sort of configuration a customer might have in their datacenter when I'm doing some performance analysis for some stuff I'm writing now off the clock.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Now that's a board for my file server. Except enthusiast boards have historically carried a bit of a mark-up beyond even the server boards, for aesthetic reasons or something.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Looks like the Xeon E3-1220L isn't in stock... anywhere. I can't find the sucker at all. It's definitely in the ARK, but I guess it hasn't made its way into distribution yet. The 1220L differs significantly from the E3-1220 in TDP and cache, so it's great for home file servers and the like where you care more about power usage than performance. I don't think any of these Xeons include Romley support, since that'll be on LGA1356 and LGA2011 (I seriously wonder if they picked LGA2011 for the year of release instead of for technical reasons). I believe these chips don't include the SAS controllers that the later Xeons will have.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

movax posted:

LGA2011 looks pretty pro though, can't wait to see people with entirely too much money start building some machines with that. RAM is ultra cheap too, 6x4GB go go go (I bought 2 3x4GB kits for my server recently, incredibly cheap).
I thought you'd need to buy some newfangled DIMMs other than the usual ECC DDR3 DIMMs? I was holding out for LGA2011 but stuff came up and I needed a machine ASAP. A Romley based setup would have been awesome for a low-end (for "enterprise") file server given what I have with my Sandy Bridge machine. I'm still shocked at how little power my E3-1230 uses though - system total is like 40w idle.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

freeforumuser posted:

Yeah, it's retarded that the 2600K has VT-x but not VT-d despite being the top-tier CPU for its socket, and the P67 chipset should never have existed in the first place. But on the chipset front, AMD had a lot more pointless chipsets this gen and the last.
That's Intel's market segmentation tomfoolery that's messing with you. I decided I'd rather just get a Xeon instead of going down to an AMD machine that'd gulp up a fair bit more power and perform worse for most of my tasks throughout its life. The costs of going Xeon aren't all that terrible if virtualization features are any bit of a concern for you - you should probably be on a Xeon already anyway. Then there's the HCL for VMware ESXi that bugged me, and I'd want Intel NICs and server-grade (haha, yeah...) SATA controllers, which would raise the costs to get the system up to a functional level. I put together a fairly badass Xeon system with a GTX 560 and a 128GB SSD (about 25% of the cost!) for about $1100 in the end, and I'm fine with that, especially since I can write it off on my taxes anyway.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
A Xeon E3-1230 is ~$240: http://www.newegg.com/Product/Product.aspx?Item=N82E16819115083
2x4GB ECC DDR3 RAM ~$80: http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262
C202 motherboard (admittedly the cheapest of the bunch, but I didn't give a drat about SATA3 for my needs) $160: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182252

So comparing costs vs. a 2600k, which does get slightly better performance:
$314 v. $240 = -$74
2x4GB DDR3 I just bought for $42 = +$38
H67 motherboard from Intel is $130 = +$30
----
-$6 - oh really now?

So... basically for the same costs here you trade off motherboard features, a tad bit of CPU performance (but the E3-1230 has lower idle than the i7 if power efficiency is a bigger concern like it is for me) and are locked into paying for ECC RAM that's about 80% more expensive... when it's one of the cheapest parts of a modern system.
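The same arithmetic as a quick sanity check (prices exactly as quoted above):

```python
# Xeon E3-1230 build vs. i7-2600K build, per-part price deltas (Xeon minus i7).
xeon = {"cpu": 240, "ram": 80, "board": 160}
i7 = {"cpu": 314, "ram": 42, "board": 130}

delta = {part: xeon[part] - i7[part] for part in xeon}
print(delta)                # {'cpu': -74, 'ram': 38, 'board': 30}
print(sum(delta.values()))  # -6, so the Xeon build comes out $6 cheaper
```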

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

HalloKitty posted:

Transcoding seems like such awful nonsense, and I hate that devices don't ship with more codecs with hardware acceleration.
I think people wouldn't mind transcoding as much if it were done on-the-fly when syncing media. If someone could write a plugin for iTunes that intercepts added files, recognizes they can be converted, and tricks iTunes into reading different headers, syncing to a device could trigger an on-demand transcode of the media files.
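A minimal sketch of that idea (the codec list and ffmpeg flags are my own assumptions, and the iTunes-interception part is hand-waved away entirely; this only decides whether a sync needs a transcode and what command would do it):

```python
from pathlib import Path

DEVICE_CODECS = {"aac", "alac", "mp3"}  # assumed: what the sync target plays natively

def sync_command(src: str, codec: str):
    """Return an ffmpeg invocation if the file needs transcoding, else None."""
    if codec in DEVICE_CODECS:
        return None  # device plays it natively; sync the file as-is
    out = str(Path(src).with_suffix(".m4a"))
    return ["ffmpeg", "-i", src, "-c:a", "aac", "-b:a", "256k", out]

print(sync_command("album/track.flac", "flac"))  # builds an on-demand transcode
print(sync_command("album/track.m4a", "aac"))    # None: no transcode needed
```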

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

calcio posted:

Is the DoD really buying high end Asus motherboards and not Dell, etc.?
The reference designs will be picked up by HP and Dell, and if none of them have a PS/2 port, they'll scream at Intel to put it back, because neither HP nor Dell wants to pay for the engineering effort to add it back onto the reference design they just briefly look at and send to Foxconn to build.

Also, the DoD does have the ability to completely disable USB mass storage drivers on all physical machines with access to a classified network; it's just that they'd rather kill the USB stack entirely than bother, because excising just the mass storage drivers and scripts would be more effort than the pension-seekers and paper-pushers want to deal with.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

dpbjinc posted:

It's not a major issue, since it only affects non-server systems that need full disk encryption, which are a small minority of systems out there, but it is something to consider for organizations where it would matter (i.e. DoD and friends).
Just FYI, but major companies in the private sector are starting to mandate full disk encryption - HP, for example. I know several major financials have rolled out such policies or are in the middle of preparing them. IT in big companies is budgeted primarily to protect the company from sinking or losing money, not to make people more productive. They'd rather have everyone spend another 10% of their day slowed down by their computers than lose one laptop with some random projects of no consequence to a competitor, because... their bizdevs have no idea how to calculate the losses, so it's unanimously "invaluable".

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The warning I have is that UDIMMs rapidly become more expensive than RDIMMs on those C20x boards once you start looking for 8GB sticks. It's fine if you're OK with 16GB of RAM ever on a machine, but if you're looking for 32GB or more, you're kinda screwed. That would also mean looking at E5 Xeons and different motherboards starting around $190. Then there's the downclocking "feature" when you load up a board with that many UDIMMs (wish I could find the paper showing how it works, but I believe it's inherited from Nehalem), and all of a sudden RDIMMs look like a wonderful bargain.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Aquila posted:

I'm not convinced you could find any real-world workload where Atom CPUs would actually save you money, rack space, power, or whatever your most important thing to save is.
This is pretty consistent with how the power efficiency gains of an Xbox cluster were not worth the cost savings over a comparable high-end Xeon or Opteron cluster for the time period. When you optimize for plain throughput, power is more often than not saved as well in modern designs, just on the principle of keeping all units occupied at all times. Furthermore, compute clusters typically run at full speed, and the power efficiency of low-power CPUs mostly shows up at really light workloads, while for maybe another 800% in power you can get 1200%+ greater throughput from most server-centric CPUs. Most of the low-power cluster companies and divisions out there are using ARM rather than Atom due to higher efficiency and the general availability of various designs and processes for the SoCs, instead of depending on a single vendor.
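The 800%-vs-1200% point works out like this (my own framing, taking the figures at face value: the big CPU burns 9x the power for 13x the throughput):

```python
# Perf-per-watt at full load: low-power CPU baseline vs. a server-centric CPU
# using "another 800%" power (9x total) for "1200% greater" throughput (13x).
low_power = {"watts": 1.0, "throughput": 1.0}
server = {"watts": 1.0 + 8.0, "throughput": 1.0 + 12.0}

def perf_per_watt(cpu):
    return cpu["throughput"] / cpu["watts"]

# Ratio > 1 means the big core is MORE efficient when fully loaded:
print(perf_per_watt(server) / perf_per_watt(low_power))  # ~1.44
```

So as long as the cluster actually stays busy, the "efficient" chip loses on efficiency.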

The primary way I can imagine ARM having staying power in the server market would be as directly executed or virtualized mobile workloads in a datacenter. The shift by clients toward ARM CPUs is, in practice, pretty scary. What's funnier is how mediocre ARMH stock has performed despite so much of the industry having a huge vested interest in their SoC designs.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I can only imagine how much backlash the system integrators / OEMs could give Intel, because this basically forces them into really close relationships with Intel. Unless Intel can articulate how this benefits both consumers and their partners, it seems like a really brazen land grab to squeeze out any OEM that isn't in Intel's pocket already. They've already done a number of eyebrow-raising things with artificially disabling various hardware features for market segmentation reasons (another way of basically screwing over anyone trying to get value out of a company's offerings without making enough people angry enough to cause full-blown outrage).

This is getting dangerously close to something that could be prosecutable... if the manufacturers of motherboards were all in the US. Because international capitalist competition is basically "lol, gently caress you, Imma gonna take over your market," there may be no legal ground to stop Intel from doing this. International market regulation is murky ground that's mostly defined by trade agreements as far as I know (and beyond really broad strokes like "free trade" v. "protectionism", there are only so many things keeping companies from curbstomping each other across the world).


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
If your use case is VMware Workstation, there's probably little value in VT-d for you (aside from the direct PCI device access bit, which is pretty snazzy for, say, a GPGPU cluster or direct access to a weirdo piece of hardware). Raw device mappings exist specifically for storage use cases, though, while VT-d generalizes that to arbitrary devices. The feature starts to matter more when you've got a dedicated, higher-consolidation-ratio setup with lots of VMs that can put a hurt on low-latency operations like IRQ banging and DMA remapping. VT-d is basically mandatory, too, when you're doing desktop virtualization scenarios and you'll have dozens of users banging away on remote console connections on the same server.
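If you want to check whether a Linux box even exposes the CPU-side virtualization flags (VT-x shows up as `vmx` in /proc/cpuinfo; VT-d itself is a chipset/BIOS feature that shows up in dmesg as DMAR, so this only covers half the story), a parser like this works; the sample string below is made up:

```python
def virt_flags(cpuinfo_text: str) -> set:
    """Return virtualization-related CPU flags found in /proc/cpuinfo text."""
    interesting = {"vmx", "svm", "ept", "vpid"}
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found |= interesting & set(line.split(":", 1)[1].split())
    return found

sample = "flags\t: fpu vme de pse tsc msr pae vmx ept vpid"
print(sorted(virt_flags(sample)))  # ['ept', 'vmx', 'vpid']
```

On a real box you'd pass in `open("/proc/cpuinfo").read()` instead of the sample.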

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
To put it another way, the rich become richer, and the wise and powerful use their resources to maintain their power. Only the foolhardy rest on their laurels.

Oh, and if Intel is targeting mobile as seriously as their APU push suggests, they'll be wading into mobile's price segmentation (read: low pricing), which means Intel intends either to push down pricing aggressively or to try to expand the current mobile market's low margins (for OEMs rather than integrators like Apple and Samsung) and grow the business from their market position.

So... this means AMD is basically screwed, because their APUs will only be able to serve a market that's just not growing (people are mostly fine with their laptops, because they're only used for MS Office work if the user already has a tablet, or because they're basically poor and that's their sole machine). AMD's best bet here may be to go after developing / emerging markets, but uh... that's where mobile has already taken a foothold (many remote African and Indian villages get cell coverage and are using SMS and last-gen mobile infrastructure tech).

Perhaps AMD may be able to find a place in industrial designs or something where people look aggressively for low-cost parts but don't really care that much about power efficiency compared to being able to, say, withstand extreme conditions. Too bad that's an entrenched market overall with massive fragmentation, but maybe they're big enough to muscle out boutique CPU builders.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Most of us that have been looking at building serious servers at home for things like CUDA and Hadoop clusters are aware of Intel's market segmentation for PCI-E lanes and basically got cornered into using Xeons and their expensive motherboards.

mayodreams posted:

I wish I had waited a bit because the V2 line came out like 2 months later, and had a SKU at the price point I wanted that had the GPU integrated, because I have to use a crappy GF 430 now on my Virtual Host.
I basically built your machine, but I had a GTX 560. I say had because as I was reinstalling it after a power consumption test, I heard a wonderful zap while picking the card up.

If you ever intend on making your machine a mostly-headless server in its lifetime, I'd recommend shelling out an extra $25 on the GPU-integrated Xeons, mostly because losing a PCI-E slot on a server motherboard can be tough, and lots of server-class motherboards don't even have a PCI-E x16 slot unless they're ATX boards. I was in a rough spot with an Intel microATX motherboard that had a PCI-E x16 slot... and it turns out almost no other uATX motherboard on the market has one.

Combat Pretzel posted:

If I want a mainboard with actual VT-d support, not just on paper with the option left off and out of the BIOS (I'm looking at you, Asus), am I guaranteed to get this if I go with Intel? Or does that also depend on the mainboard model, even though the chipset supports it? I'm currently loosely planning a Haswell build (need some quotes to plan my budget on).
You need to match the CPU and motherboard: the motherboard's BIOS has to support the CPU features even though the feature is centered on the CPU. For example, if you used a desktop LGA1155 motherboard with an LGA1155 Xeon, ECC would almost certainly be disabled or, at best, not reported correctly to a number of programs, including MemTest. Even a Z77 motherboard likely won't support ECC. There's a thread running around where people were mad at Asus over a desktop board that didn't properly report the ECC feature to memtest. Historically, Asus would support ECC on their motherboards anyway, but as of Sandy Bridge that feature has disappeared (noted by many posters in the thread as the reason they bought Asus in the first place).
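One way to see what the board actually ended up reporting, rather than trusting the spec sheet, is to parse `dmidecode --type memory` output for the error-correction field (the sample text below is fabricated; on a real box you'd feed in the actual command output, run as root):

```python
def ecc_mode(dmidecode_text: str):
    """Return the reported Error Correction Type, or None if absent."""
    for line in dmidecode_text.splitlines():
        line = line.strip()
        if line.startswith("Error Correction Type:"):
            return line.split(":", 1)[1].strip()
    return None

sample = """Physical Memory Array
        Location: System Board Or Motherboard
        Error Correction Type: Single-bit ECC
        Maximum Capacity: 32 GB"""
print(ecc_mode(sample))  # Single-bit ECC
```

If this says "None" on a board that's supposed to do ECC, you've hit exactly the Asus situation above.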

The worst part at present is that even AMD is dropping ECC support from their reference motherboards, when it was one of the few remaining reasons to consider an AMD build for a home server instead of an Intel box. So you're forced into Opterons, which do support RDIMMs, but then pricing starts looking close to Xeons as well, diminishing another reason to bother with an AMD machine.

Basically, UDIMM support is under attack across the market, and I speculate it may be phased out entirely in perhaps 4 years due to the widening divide between server-side and client-side computing requirements, as well as the stagnation of client-side requirements in software.

I found out a lot of this when I was scrambling to find a different board than my current Intel S1200BTS board thinking it had fried and found some grim things about the state of trying to do server-class capability setups on a home budget.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Alereon posted:

One critical thing to remember: in a world where all code is executed with a JIT compiler on the device, it doesn't really matter what instruction set your processor is using as long as it's supported by the compiler and the product does what you need.
The (rather glaring and important) exception here is mostly driven by iOS, honestly, since Android has otherwise won the mobile space. But a great deal of cloud services run on stacks other than the JVM. Ruby's various VMs are only so mature (how long did it take for the GC to get to where Java's was in 2001?), and .NET is about all that remains to round out the backend stacks, not counting the obscure ones based around node.js and Erlang. So really, runtime application VMs are only so useful across platforms, and the JVMs for different ARM variants are nowhere near as optimized and scrutinized as on x86, partially due to the fragmentation and breakneck pace of ARM ISA development over the past several years.

Apple may in fact be the biggest barrier to Intel. Apple has shown it's not too afraid to switch architectures, but the current Apple is not necessarily the Apple we've known for decades. They may just stay on x86 for laptops due to the sheer amount of software that now targets x86 and OS X - a position unlike any Apple has held in its history (m68k, PPC, etc. were all niche or part of a fragmented ecosystem, let's face it).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I think we'll see what sort of long-term strategy Apple is going to commit to depending upon what their supposed update to the Mac Pro is. It seems a bit obvious to me that Apple isn't going to enter a thin-margin business of any sort, and that includes a number of cloud services that everyone tends to buy. While iCloud is an obvious exception here, the goal of the service is to keep people invested in buying more iDevices.

On the other hand, going back and refining existing products doesn't produce the kinds of margins that reinventing a market entirely does, like the iPhone or iPad, so maybe they'll just keep the "make it thinner and prettier than the competition" strategy that's their normal procedure. Apple may be in a bit of a lull now, but if they manage to solve the living-room content delivery and unification problem (one of the toughest to solve, not for technical reasons but political ones), they may be able to recover all of their recently shed share price and then some, as well as cement themselves as the most valuable company in America. That'll be kind of a sad day to me, but business is business; can't really attach sentiment to it.

Apple's probably working on ARM laptops for the same reasons they kept an x86 OS X build around for years - business contingencies.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Apple having an enterprise desktop strategy makes only so much sense given they abhor dealing with the usual BS of enterprise software with armies of consultants and architects. While they do appreciate the business, they're just not looking like they're going to suck enterprise dick like IBM, HP, Unisys, BMC, CA, ad infinitum. They're not exactly rolling out iOS Management Solution Suite either for their already-successful entry into the enterprise supply chain. I really don't think that such developments would attract the sort of employees Apple wants either.

BYOD at regulated companies is just not going to go the same route as at unregulated companies like Intel over the next 15 years. I'll suck a mean dick and swallow with a smile if hardasses like the financial and healthcare verticals let you just buy a random laptop and put it on the network, no problemo from their IT.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Misogynist posted:

I think this is the same thing that everyone was saying back around 2007 when the consensus was that they would never support Microsoft Exchange on iPhone.
Funny enough, I was one of those people who thought Exchange client licensing would be low-barrier enough to drive adoption among business users who didn't want a work phone plus a personal phone. In any case, putting much more effort into laptops / desktops than absolutely necessary runs counter to Apple's "post-PC era" product strategy at the enterprise level and all.

bull3964 posted:

Someday I will have my dream of an ~11" notebook with a 2560x1600 screen, 6+ hours of battery life, and the ability to run most games at decent settings at 1080p.
That's a rather low set of aims for "dreams," don't you think? As a consumer, I'm hoping for devices cheap enough that I can just borrow someone's machine for free or less than a cup of coffee, get immediate, on-demand access to all my crap from the middle of nowhere in about 2 minutes, run a bunch of backend software and do a day of work or so, and have a solid guarantee that none of my information has been compromised to anyone who doesn't have bigger problems than attempting to manage it all. Oh yeah, and I get a personal screen like Google Glass I can hook up wirelessly and get at least 60 fps for everything at 4K or better resolution. Somehow all this will have to be profitable for the people running things while I don't pay for much at all, and it can't give me brain cancer in 5 years.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Those virtualization improvements are giving me a nerd boner. Next thing they'll add is GPU virtualization extensions with Broadwell beating nVidia to the punch and I'd have a small nerdgasm.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Tab8715 posted:

I just want this so I could run OS X in a VM :smug:
Works for me under VMware Fusion :v:

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
It's been a long slog for external GPUs, but the biggest problem I've hit with getting rid of PCs completely while still scratching a gaming itch is that Macbook Pro Retina storage is so expensive and Boot Camp doesn't work off external drives (for reasons beyond Apple's control, it seems). Instead of paying $800+ extra for another few hundred GB, AND a Thunderbolt dock or something, AND a GTX 680+ GPU (Retina displays could use that power), I could have bought a fully loaded mini-ITX box and a copy of Windows 7. Beyond that, Thunderbolt's PCI-E bandwidth is limited to only 4 PCI-E 2.0 lanes at this point and seems locked in hardware, so it wouldn't matter if I had a GPU backed by a nuclear fusion reactor. Thunderbolt must improve drastically for the mobile high-end gaming Macbook Pro concept (max details at native resolution - not really possible at Retina resolution on a GT 650M) to be viable.
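To put a number on the Thunderbolt limitation (assuming the usual ballpark of ~500 MB/s effective per PCI-E 2.0 lane after 8b/10b encoding overhead):

```python
# Effective GPU bandwidth over Thunderbolt's 4 PCI-E 2.0 lanes vs. a real x16 slot.
MB_PER_LANE_PCIE2 = 500  # ~effective MB/s per PCIe 2.0 lane after encoding overhead

def bandwidth_gb(lanes: int) -> float:
    return lanes * MB_PER_LANE_PCIE2 / 1000

print(bandwidth_gb(4))   # 2.0 GB/s over Thunderbolt
print(bandwidth_gb(16))  # 8.0 GB/s in a desktop x16 slot
```

A quarter of the bandwidth a desktop GPU expects, which is why the fusion-reactor GPU doesn't help.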

On the other hand, I'm very interested in the Steam Box as a result but don't see myself having that much of a blast with the sort of GPUs that could work in that kind of chassis. Until things converge about right, I'll just play the few games I do run under OS X.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
This isn't 1997 where the processor next year would actually get you 20%+ more performance out of your applications. I'll be sticking with my MBPr and the Xeon E3-1230 in my server for a good long while unless I start needing lots and lots of hardware.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I've been hearing conflicting things about simultaneous DDR3 / DDR4 support in Haswell not making it until a certain roll-out. Some say it'll arrive with the E3s, some with the E5 series, and some say it'll be EX only. Anyone have a definitive answer on the timeline for DDR4 in Intel chips, besides the Wikipedia page confirming EX and the reports from last year on EP getting DDR4?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
In that market (E5450/X5450), Intel is absolutely going to charge extra to lower the TDP because it's a major, major aspect of buying decisions for enterprise server OEMs and resellers. Somehow at the same time most of these customers don't bother thinking about lowering the power supply wattage rating to get increased power efficiency (I know Google and Facebook have customized PSU designs with surprisingly few parts that feed off of DC instead of AC, for example, but I don't consider them traditional enterprises in any manner besides budgets). When you're buying a crapton of blades at $10k+ / pop for several chassis filled, the $75 more is a drop in the bucket. This isn't to say that I believe it's actually cost-effective to buy such things (I'd say to focus on cutting down your likely stupidly bloated software on horribly tuned DBs and JVMs running GBs of dead code), but the point is mostly that Intel's getting margins out of this when consumer CPU margins are getting increasingly worse for the engineering dollars invested. I think of it as enterprise-aimed DLCs (read: "value-adds").
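As a sketch of why that premium still sells (all numbers mine, not from any vendor: a part with 20 W lower TDP running flat out, $0.10/kWh, three years of service):

```python
# Does paying ~$75 extra for a lower-TDP SKU pay for itself in power alone?
watts_saved = 20              # assumed TDP delta, fully loaded
hours = 24 * 365 * 3          # three years of always-on operation
kwh_price = 0.10              # assumed $/kWh

savings = watts_saved / 1000 * hours * kwh_price
print(round(savings, 2))      # ~52.56: power alone nearly covers the premium,
                              # before counting cooling and rack density
```

And in a dense datacenter the cooling and rack-density effects dwarf the raw electricity, which is why OEMs care so much.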

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
So it looks like there will be Haswell-ready PSUs by the launch, hrm: http://www.seasonic.com/new/twevent20130510.htm

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'm still skeptical of people claiming 20+ hour runtimes on laptops, given that more than half the battery drain comes from the LCD backlight rather than the CPU. Then there's the wifi to worry about, too. But I'm very curious to see whether x86 mobile will actually be viable soon. My gut tells me it'll be worthless without a cross-compiler or migration kit for most developers, as well as a viable iPod/iPhone contender, and since Microsoft failed here, uh... I doubt Intel could do it. Perhaps revitalizing notebooks and making them really attractive again is the main strategy, and mobile is the backup plan. This seems to be the inverse of the nVidia strategy.
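Rough numbers on why I'm skeptical (all assumed: a 50 Wh battery, ~3 W of panel/backlight draw, ~2 W for CPU plus wifi plus everything else at light load):

```python
battery_wh = 50.0
backlight_w = 3.0   # assumed panel + backlight draw
platform_w = 2.0    # assumed CPU + wifi + rest at light load

print(battery_wh / (backlight_w + platform_w))      # 10.0 hours as-is
# Even halving the non-display power only helps so much:
print(battery_wh / (backlight_w + platform_w / 2))  # 12.5 hours
```

The display term dominates, so CPU heroics alone can't get you anywhere near 20 hours.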

necrobobsledder
Mar 21, 2005
I remember watching the iPhone announcement, with Steve Jobs hoping on a slide that they'd get 10% of the smartphone market, and laughing: "Wow, that's ambitious, they don't even have 10% of desktop or laptop sales." I didn't realize that Nokia, Blackberry, and Palm would fall flat on their faces against the iPhone 3G, or that Google wasn't quite ready with that Android acquisition from years before. Now that's a CEO underpromising and overdelivering if I've ever seen it.

necrobobsledder
Mar 21, 2005
The E3s are basically i5/i7 CPUs with (unregistered-only) ECC and VT-d support plus a few other enterprise-class features trickled down from the actual server CPUs, which start with the entry-level E5 processors that are actually dual-socket capable. What's funny is that most E5s wind up in servers that could actually use integrated graphics, while the E3s usually wind up in workstations with Quadro and FirePro GPUs (not to mention the price segmentation and such), so it would make more sense to load the E5s with low-end integrated GPUs (and steal some revenue from AMD, given the tight integration with the chipset needed to support onboard graphics and all). The memory controller being that full-featured is exactly why R-DIMMs are byte-for-byte cheaper than UDIMMs as well (the register on an R-DIMM buffers the address/command lines, taking load off the memory controller, while with UDIMMs the controller has to drive every chip directly without that dedicated support).

I'm really curious about the E3s getting TSX support though, because that'll make a huge difference for people trying to muck about with high-throughput transactional code beyond just databases, and few developers want to bother shoving their crap onto a production server just to test all these fancy new instructions out. Then again, most of the devs involved with the major stuff probably have early access to Haswell and TSX-producing compilers from Intel.

necrobobsledder
Mar 21, 2005
From what I've read, the follow-up to Haswell (Broadwell) will be BGA-only and sold directly to OEMs, so those of us building aftermarket like most of us in SH/SC do for our home systems will need to wait until Skylake to whitebox. Because this is all about forcing buyers to go to the OEM for paired CPUs and motherboards (streamlining costs and trimming logistics expenses while letting Intel pursue their mobile strategy), I think they'll either mark up the still-socketed Broadwell Xeons even more, or we'll all be forced into buying expensive marked-up Dell, HP, etc. servers instead of whiteboxing our own. There's evidence Intel is trying to squeeze a bit more life out of socketed E3s, at least via more market segmentation AKA abusing market position (the Haswell E3 Xeon's LGA1150 is reportedly not compatible with Z87 LGA1150 boards), so we may see something else to ruin our day as prosumers. The E3 is the last place where Intel could really put the screws on those of us with professional needs but non-Fortune 500 budgets, and since I don't think AMD will exactly step up to the task (they got rid of ECC support on newer CPUs unless they're Opterons), there's hardly a choice now without losing ECC on buttloads of RAM.

My current workstation / server is an E3-1230 Sandy Bridge system I built in 2011, and it's remarkable how well it's held up through Ivy Bridge. It's maxed out at 32GB of RAM, and while the memory clock gets halved as a result (fuckin' UDIMMs), that really doesn't matter for most people's jobs until you get into supercomputing sorts of workloads (there's a paper by some guy running a big cluster on COTS hardware where they actually did notice a substantial improvement in job throughput with higher-clocked RAM). I anticipate that Haswell will be the last E3 I can whitebox into a Xeon system though, and with Intel stopping at 32GB of UDIMMs for Haswell (no push for 16GB+ DDR4 UDIMMs), they've identified and put their foot down on the memory boundary between workstation needs and "you should just run it on a server." It'll be peculiar to shove my current Xeon into a gaming setup in another year or so when it'll still do great on lots of titles; it's kind of amazing to think that I could have a good chance of using a CPU for gaming for 3 years without complaint, as opposed to 10 years ago.


So uh... how about them M.2 slots in those Z87 motherboards? (Never mind that there are still no M.2 SSDs available)

necrobobsledder
Mar 21, 2005
There's hardly any information about Haswell E3s at this time besides that they will exist and that there are some physical incompatibilities noted in current releases to the press. There's still no certainty that they'll even have TSX enabled, which is the #1 reason for developers to care about Haswell for their workstations (hardware transactional memory is serious business). If Intel doesn't enable it, you're forced into buying an E5 with RDIMMs and a $200+ motherboard and all that, while being unable to test your stuff locally. E5s are not in the same prosumer category as an E3 - they're typically marketed to OEMs for low-to-mid-end servers starting around the $1200 range, with stupidly overpriced RDIMMs and laughable storage choices. This just obligates everyone to go talk to a Dell or HP sales rep for discounts to make prices somewhat sane, and again, that just makes the sales channels easier to manage, which reduces customer acquisition and maintenance costs.

I'm going to hold off on any upgrades to Haswell at this point, given it's an early adopter thing, and given my need to reduce my stock of everything more so than to acquire even more crap I don't really need. Nothing I do professionally at this point would benefit from Haswell except maybe a year or two in the future, so I have little need to buy it now anyway.

necrobobsledder
Mar 21, 2005
Haswell is hardly an improvement in the bleeding-edge performance segment so much as in low-power efficiency, which is what matters for laptops and tablets, where Intel's trying really hard to get a foot in the door before they're relegated to being "the server CPU company" with basically only Xeons to sustain them.


necrobobsledder
Mar 21, 2005

Bob Morales posted:

Are the Atoms being used in laptops/netbooks anymore or just stuff like the Surface RT?
For one thing, netbooks basically aren't being made anymore, with AMD having taken that market away from Intel (almost gladly ceded, even) given the poor margins and Intel's focus on ULV mobile in response to the iPad + iPhone ARM onslaught (I am not a believer that Intel has won from the cloud revolution, unlike some writers / visionaries out there). A number of Atoms have been pushed into the now-saturated / moribund home server market in recent years, displacing the really esoteric CPUs (including MIPS) that a lot of NASes used just for being cheap, but Atoms hardly get the focus they had in, say, 2008 with the netbook craze. I find it kind of hilarious that AMD kinda randomly showed up and won here when people were focusing on nVidia (remember the nVidia Ion series?) v. Intel for so long.

movax posted:

With both companies having their sights set on ~10" tablets/portables and trying to compete against ARM, we'll be seeing movement towards the power corner vs. raw clocks/performance/etc I think (R&D dollars have almost certainly been targeted more towards things that can bring down TDP vs. exploring architectures)

I still think the tablet game is basically over without a way for developers to port everything over to x86 quickly. Apple may ironically make x86 mobile rather relevant if they release a solid iOS-on-x86 tool suite, something similar to Rosetta, in pursuit of their highly controversial "iOS ALL THE THINGS" strategy. The big winning move for x86 tablets, requiring minimal cooperation from the Android / Apple side, would be a hypervisor that lets users run iOS, Android, Linux, Windows, etc. on their 10" tablets. That front has been awfully quiet for a while; I last saw Xen making moves to do it for laptops in 2009. ARM virtualization hardly seems to be at the top of vendors' agendas, in favor of baking in user profiles instead, while x86 virtualization is basically old hat now.
