necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Most of us who have been looking at building serious servers at home for things like CUDA and Hadoop clusters are aware of Intel's market segmentation on PCI-E lanes, and we've basically been cornered into using Xeons and their expensive motherboards.

mayodreams posted:

I wish I had waited a bit, because the V2 line came out like two months later and had a SKU at my price point with the GPU integrated; instead I have to use a crappy GF 430 on my virtual host now.
I basically built your machine, but I had a GTX 560. I say had because, as I was reinstalling it after a power-consumption test, I heard a wonderful zap while picking the card up.

If you ever intend to make your machine a mostly-headless server at some point in its lifetime, I'd recommend shelling out the extra $25 for the GPU-integrated Xeons, mostly because losing a PCI-E slot on a server motherboard can be tough, and lots of server-class motherboards don't even have a PCI-E x16 slot unless they're ATX boards. I was in a rough spot with an Intel microATX motherboard that had a PCI-E x16 slot... it turns out almost no other uATX motherboard on the market has one.

Combat Pretzel posted:

If I want a mainboard with actual VT-d support, not just on-paper support with the option left out of the BIOS (I'm looking at you, Asus), am I guaranteed to get that if I go with Intel? Or does it also depend on the mainboard model, even though the chipset supports it? I'm currently loosely planning a Haswell build (I need some quotes to plan my budget around).
You need to match the CPU and the motherboard: the board's BIOS has to support the CPU feature even though the feature itself lives on the CPU. For example, if you paired an LGA1155 Xeon with a desktop LGA1155 motherboard, ECC would almost certainly be disabled or, at best, not reported correctly to a number of programs, including MemTest. Even a Z77 motherboard likely won't support ECC. There's a thread floating around where people were mad at Asus over a desktop board that didn't properly report the ECC feature to MemTest. Historically it was Asus that would support ECC anyway on their desktop boards, but as of the Sandy Bridge generation that feature has disappeared (many posters in the thread noted it was why they bought Asus in the first place).
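If you want to sanity-check whether ECC actually came on instead of trusting the spec sheet, the SMBIOS tables are more honest than most BIOS screens. Here's a minimal Python sketch for a Linux box, assuming dmidecode is installed and relying on the usual convention that ECC DIMMs report a 72-bit total width against a 64-bit data width:

code:
import re
import subprocess

def dimm_widths():
    """Read SMBIOS memory-device records via dmidecode (typically needs root)."""
    out = subprocess.run(
        ["dmidecode", "--type", "memory"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = [int(x) for x in re.findall(r"Total Width:\s*(\d+)\s*bits", out)]
    datas = [int(x) for x in re.findall(r"Data Width:\s*(\d+)\s*bits", out)]
    return zip(totals, datas)

# ECC modules carry 8 extra check bits: 72 bits total vs. 64 data bits.
for total, data in dimm_widths():
    print(f"total={total} data={data} -> {'ECC' if total > data else 'non-ECC'}")

If the kernel actually enabled ECC reporting, you should also see a memory controller show up under /sys/devices/system/edac/.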

The worst part at present is that even AMD is dropping ECC support from their reference motherboards, when it was one of the few remaining reasons to pick AMD over Intel for a home server. So you're forced into Opterons, which do support RDIMMs, but then pricing starts looking close to Xeons', diminishing another reason to bother with an AMD machine.

Basically, across the market UDIMM support is under attack, and I speculate that UDIMMs may be phased out entirely in perhaps four years, given the widening divide between server-side and client-side computing requirements and the stagnation of client-side software demands.

I found out a lot of this while scrambling to find a replacement for my Intel S1200BTS board (I thought it had fried), and learned some grim things about the state of building server-class setups on a home budget.


Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

necrobobsledder posted:

Basically, across the market UDIMM support is under attack, and I speculate that UDIMMs may be phased out entirely in perhaps four years, given the widening divide between server-side and client-side computing requirements and the stagnation of client-side software demands.

That's probably a symptom of Intel jumping to DDR4 with Haswell Xeons. DDR4 abandons the banks-of-RAM-per-channel approach and moves to a point-to-point topology of one DIMM per channel, with more channels at the memory controller and 3D DRAM die stacking within the first release of the spec to increase per-DIMM density. Without the signal being shared among multiple DIMMs, and with the huge increase in bandwidth DDR4 will bring, pushing more effort into DDR3 and its soon-to-be-less-relevant multi-bank techniques isn't really a top priority.
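For a sense of scale, peak theoretical DRAM bandwidth is just the transfer rate times the 8-byte channel width times the channel count. A quick back-of-the-envelope sketch; the speed grades here are illustrative, not product claims:

code:
def peak_gb_per_s(mt_per_s, channels=1, bytes_per_beat=8):
    """Peak theoretical bandwidth: MT/s x 8-byte channel x channels."""
    return mt_per_s * 1e6 * bytes_per_beat * channels / 1e9

print(peak_gb_per_s(1600, channels=2))  # DDR3-1600, dual channel  -> 25.6 GB/s
print(peak_gb_per_s(2133, channels=4))  # early DDR4, quad channel -> 68.3 GB/s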

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

necrobobsledder posted:

You need to match the CPU and the motherboard: the board's BIOS has to support the CPU feature even though the feature itself lives on the CPU.
Yeah, well, the non-K Core i7s support VT-d (and I hope that won't change in the future). The chipset on my mainboard, the P67, does too. Except that none of the Asus mainboards have a BIOS option to enable it, even though an earlier revision of the manuals listed it as an option. Ironically, ASRock has it on all relevant mainboards.

My current box virtualizes a FreeBSD instance acting as a virtual fileserver to get ZFS under Windows, plus a Linux instance as a router (my hardware router pissed me off once too often, and since the box runs 24/7, why not). All under Hyper-V. Right now PEG passthrough is mostly a matter of luck, but I expect it to become somewhat usable during the lifetime of my planned Haswell box. At some point I'd also like to (at least try to) switch things around and run Linux with KVM or Xen as the host, with Windows as a guest with hardware-accelerated graphics. For that, having VT-d as a working option would be nice.
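For anyone wanting to check whether VT-d is genuinely exposed rather than just listed in the manual, here's a minimal Linux-side sketch. Note that the vmx cpuinfo flag only covers VT-x; VT-d is advertised through the ACPI DMAR table rather than a CPU flag:

code:
import os

def cpu_flag(flag):
    """Look for a CPU feature flag (e.g. 'vmx' for VT-x) in /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        return any(flag in line.split() for line in f if line.startswith("flags"))

def dmar_table_present():
    """If the BIOS exposes VT-d, firmware publishes an ACPI DMAR table."""
    return os.path.exists("/sys/firmware/acpi/tables/DMAR")

print("VT-x (vmx flag):", cpu_flag("vmx"))
print("VT-d (DMAR table):", dmar_table_present())

If the DMAR table is missing even though the CPU and chipset support VT-d, that's exactly the BIOS omission being complained about here.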

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
I'm using an Asus P8B WS board for my server and it's been great; honestly it's only like $20-30 more than I normally spend on motherboards, so it doesn't cost a ridiculous amount. You can get some good mATX Intel boards for under $200, but I needed 4 RAM slots.

I got really impatient with my build and just went to Microcenter. They had the E3-1230 I ended up getting; the other SKU was like $100 more just to get the GPU, so I passed and dealt with it.

PUBLIC TOILET
Jun 13, 2009

Would there be a benefit to building a Haswell Xeon system as a primary desktop machine over a Haswell i7 one? Aside from the obvious virtualization capabilities.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Shaocaholica posted:

Maybe the lanes have always been there, but some of them are disabled on the desktop version?

Edit: also, what's the benefit of an unlocked Xeon? Are there Xeon boards that support OC? Or will the LGA1155 Xeons work in a desktop 1155 board?

It's for idiots who read CADalyst magazine and demand the absolute highest synthetic-benchmark computer possible for working on shitty little 2D CAD architecture drawings.

Manos
Mar 1, 2004

Combat Pretzel posted:

Yeah, well, the non-K Core i7s support VT-d (and I hope that won't change in the future). The chipset on my mainboard, the P67, does too. Except that none of the Asus mainboards have a BIOS option to enable it, even though an earlier revision of the manuals listed it as an option. Ironically, ASRock has it on all relevant mainboards.

My current box virtualizes a FreeBSD instance acting as a virtual fileserver to get ZFS under Windows, plus a Linux instance as a router (my hardware router pissed me off once too often, and since the box runs 24/7, why not). All under Hyper-V. Right now PEG passthrough is mostly a matter of luck, but I expect it to become somewhat usable during the lifetime of my planned Haswell box. At some point I'd also like to (at least try to) switch things around and run Linux with KVM or Xen as the host, with Windows as a guest with hardware-accelerated graphics. For that, having VT-d as a working option would be nice.

How are you passing the disks into FreeBSD? My understanding was that Hyper-V doesn't actually do PCIe passthrough?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'm not passing them via hardware passthrough, but via the emulated IDE adapter. As whole disks, however, instead of VHDX files (the physical disk setting in the VM's disk settings). Hyper-V integration drivers for FreeBSD aren't available yet; around three weeks ago the people responsible announced they're about ready to submit them to -CURRENT.
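For reference, attaching a whole physical disk in Server 2012-era Hyper-V is a two-step dance: take the disk offline on the host, then hand it to the VM. Here's a sketch driving the PowerShell cmdlets from Python; the VM name and disk number are made-up placeholders, and the cmdlet invocation is my assumption of the usual recipe rather than anything from this thread:

code:
import subprocess

def attach_physical_disk(vm_name, disk_number):
    """Offline the disk on the host, then attach it whole to the VM
    (the 'Physical hard disk' option in the VM's disk settings)."""
    script = (
        f"Set-Disk -Number {disk_number} -IsOffline $true; "
        f"Add-VMHardDiskDrive -VMName '{vm_name}' "
        f"-ControllerType IDE -DiskNumber {disk_number}"
    )
    subprocess.run(["powershell", "-NoProfile", "-Command", script], check=True)

attach_physical_disk("freebsd-zfs", 2)  # hypothetical VM name and disk index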

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD
Man, I really hope they sort out this PCI-E passthrough stuff sometime in my lifetime.
Being a Mac user would be so much more bearable if you could get a nice system running under a bare metal hypervisor in its own little world.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Your best chances are with ATI cards, from what I could gather. Nvidia does a whole lot of memory fuckery that overwhelms even the IOMMUs in the latest Intel and AMD CPUs. The biggest thing I want to see fixed is the ability of the dom0 to hand off the primary graphics adapter on request, and then successfully take control back when the domU holding that adapter shuts down. The problem is, most people working on the various hypervisors don't seem to particularly care about PEG passthrough. :(

tijag
Aug 6, 2002

Shimrra Jamaane posted:

Yeah, I'm concerned with the gaming possibilities. Well, I plan to build a new desktop PC this summer anyway, so I'm sure there will be a well-priced Haswell comparable to the 2500K from two years ago.

The process Intel is building this on will be more mature, so Haswell could potentially OC better than IVB did. That's just a guess, but IVB wasn't quite as good as SNB with respect to ease of overclocking and how high the CPU would go. Perhaps Haswell will get back to SNB levels of OCability.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
A bit of news today relevant to Intel's mobile market plans:

First, Intel let out a demo video of Lenovo's IdeaPhone K900, based on an as-yet-unannounced "Medfield+" Atom SoC. AnandTech link. We'll learn more about Medfield+ next week at MWC, but for now we can reliably say that it has a better GPU, at least.

https://www.youtube.com/watch?v=PDhXaqKWfWk

Second, it looks like Nvidia is keeping the pressure on Intel on all fronts. At the last IDF, Intel was particularly proud of its new digital radio designs, which were almost completely integrated onto the SoC and very flexible. Nvidia acquired Icera recently, and that acquisition is paying off with the newly announced Tegra 4i, a variant of the Tegra 4 SoC with an integrated cellular baseband. It's an interesting chip: rather than the Cortex-A15 cores of the tablet-grade Tegra 4, the T4i is loaded with latest-revision Cortex-A9 cores at 2.3 GHz, a huge clock for a mobile part, and it's being positioned as a high-performance smartphone SoC. Benchmarks will tell whether older cores at high clocks hold up at its end-of-year debut, but it ought to put a damper on Intel's Medfield/Medfield+/Merrifield offerings, especially given Nvidia's GPU chops.

Also a bit nuts is Nvidia's reference design for the platform, which they say a carrier can resell unsubsidized for $100 to $300.

AnandTech link on that. The revised Silvermont Atom core is due out in Q3, but the initial product leaks are talking tablet and server parts, not smartphones. It could be a slugging match.

coffeetable
Feb 5, 2006

TELL ME AGAIN HOW GREAT BRITAIN WOULD BE IF IT WAS RULED BY THE MERCILESS JACKBOOT OF PRINCE CHARLES

YES I DO TALK TO PLANTS ACTUALLY
This isn't relevant to any of the Intel/AMD/GPU threads, but they're the most informed places about this stuff, so I picked the one closest to the top: what's going on with the 8 GB of GDDR5 system memory in the PS4? I was under the impression that 1600 MHz DDR3 was more than enough to keep the CPU sated, and that GDDR5 was a damn sight more expensive. Are a couple of million sticks of the stuff enough to bring the price down so far that it was cheaper for Sony to grab 8 GB of GDDR5 rather than 4 GB each of GDDR5 and DDR3?

coffeetable fucked around with this message at 03:35 on Feb 21, 2013

Nintendo Kid
Aug 4, 2011

by Smythe

coffeetable posted:

This was the chip thread highest up the page, so while not relevant I couldn't find a more informed place: what's going on with the 8 GB of GDDR5 system memory in the PS4? I was under the impression that 1600 MHz DDR3 was more than enough to keep the CPU sated, and that GDDR5 was a damn sight more expensive. Are a couple of million sticks of the stuff enough to bring the price down so far that it was cheaper for Sony to grab 8 GB of GDDR5 rather than 4 GB each of GDDR5 and DDR3?

I'm pretty sure the GDDR5 RAM is shared with the GPU in the system, so they need the bandwidth for the graphics anyway.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Assuming that the leaks are correct, the big deal is that on an AMD APU, the GPU and CPU are fed by the same memory controller. Even in a relatively low-end 1st-gen A-series APU, platform performance scales excellently with increasing RAM bandwidth, so you want as much bandwidth as you can get. But because it's all on one memory controller, you can't pick and choose DRAM without redesigning the entire chip.
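The announced numbers make the gap concrete: graphics-style bandwidth is per-pin data rate times bus width, and the figures floated for the PS4 (GDDR5 around 5.5 Gbps on a 256-bit bus) dwarf what a dual-channel DDR3 setup can feed the same controller. A quick sketch, with the configurations as assumptions for illustration:

code:
def mem_gb_per_s(gbps_per_pin, bus_bits):
    """Peak bandwidth: per-pin data rate (Gbps) x bus width / 8."""
    return gbps_per_pin * bus_bits / 8

print(mem_gb_per_s(5.5, 256))  # GDDR5 @ 5.5 Gbps, 256-bit bus      -> 176 GB/s
print(mem_gb_per_s(1.6, 128))  # DDR3-1600, dual channel (128-bit)  -> 25.6 GB/s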

coffeetable
Feb 5, 2006

TELL ME AGAIN HOW GREAT BRITAIN WOULD BE IF IT WAS RULED BY THE MERCILESS JACKBOOT OF PRINCE CHARLES

YES I DO TALK TO PLANTS ACTUALLY

Install Gentoo posted:

I'm pretty sure the GDDR5 RAM is shared with the GPU in the system, so they need the bandwidth for the graphics anyway.

The bit I wasn't getting is why the CPU and GPU had to use the same RAM, which

Factory Factory posted:

Assuming that the leaks are correct, the big deal is that on an AMD APU, the GPU and CPU are fed by the same memory controller. Even in a relatively low-end 1st-gen A-series APU, platform performance scales excellently with increasing RAM bandwidth, so you want as much bandwidth as you can get. But because it's all on one memory controller, you can't pick and choose DRAM without redesigning the entire chip.

now makes sense. For some reason I was thinking of the PS4's chip not as an update of a previous integrated design with a single controller, but as an AMD CPU and an AMD GPU basically printed side by side. Which is pretty daft, now that I think about it.

Cheers guys :)

coffeetable fucked around with this message at 03:45 on Feb 21, 2013

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Factory Factory posted:

A bit of news today relevant to Intel's mobile market plans:

First, Intel let out a demo video of Lenovo's IdeaPhone K900, based on an as-yet-unannounced "Medfield+" Atom SoC. AnandTech link. We'll learn more about Medfield+ next week at MWC, but for now we can reliably say that it has a better GPU, at least.

https://www.youtube.com/watch?v=PDhXaqKWfWk

Second, it looks like Nvidia is keeping the pressure on Intel on all fronts. At the last IDF, Intel was particularly proud of its new digital radio designs, which were almost completely integrated onto the SoC and very flexible. Nvidia acquired Icera recently, and that acquisition is paying off with the newly announced Tegra 4i, a variant of the Tegra 4 SoC with an integrated cellular baseband. It's an interesting chip: rather than the Cortex-A15 cores of the tablet-grade Tegra 4, the T4i is loaded with latest-revision Cortex-A9 cores at 2.3 GHz, a huge clock for a mobile part, and it's being positioned as a high-performance smartphone SoC. Benchmarks will tell whether older cores at high clocks hold up at its end-of-year debut, but it ought to put a damper on Intel's Medfield/Medfield+/Merrifield offerings, especially given Nvidia's GPU chops.

Also a bit nuts is Nvidia's reference design for the platform, which they say a carrier can resell unsubsidized for $100 to $300.

AnandTech link on that. The revised Silvermont Atom core is due out in Q3, but the initial product leaks are talking tablet and server parts, not smartphones. It could be a slugging match.

Intel missed the boat to the point that the ARM ecosystem has grown so dominant that even a better performance/battery-life x86 SoC won't guarantee a win. Moore's Law is now biting them in the ass: even five-year-old PCs are more than powerful enough for the average YouTube cat-video viewer, while ARM SoCs, dog slow just a few years back, now have us going "holy fucking shit" at things like Qualcomm announcing the Snapdragon 800, the successor to the already insanely fast S4 Pro, as 75% faster. Add in perpetually declining PC sales (plus things like Samsung outselling the entire PC market in units with smartphones ALONE this year) and you're looking at a company that once thrived as a high-margin, high-volume x86 monopoly being relegated to just another ARM-fabbing TSMC, stripped of its enormous R&D budget soon enough.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
n/m

Proud Christian Mom fucked around with this message at 16:04 on Feb 21, 2013

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Palladium posted:

Intel missed the boat to the point that the ARM ecosystem has grown so dominant that even a better performance/battery-life x86 SoC won't guarantee a win. Moore's Law is now biting them in the ass: even five-year-old PCs are more than powerful enough for the average YouTube cat-video viewer, while ARM SoCs, dog slow just a few years back, now have us going "holy fucking shit" at things like Qualcomm announcing the Snapdragon 800, the successor to the already insanely fast S4 Pro, as 75% faster. Add in perpetually declining PC sales (plus things like Samsung outselling the entire PC market in units with smartphones ALONE this year) and you're looking at a company that once thrived as a high-margin, high-volume x86 monopoly being relegated to just another ARM-fabbing TSMC, stripped of its enormous R&D budget soon enough.

The thing is though, the demand is still there and will continue to be there in the Enterprise space.

Someone has to sell the hardware to power all these cloud services.

Nintendo Kid
Aug 4, 2011

by Smythe

Palladium posted:

Moore's Law is now biting them in the ass: even five-year-old PCs are more than powerful enough for the average YouTube cat-video viewer

This isn't true. Generally people stuck on those kinds of computers actually do have trouble doing things on the modern internet, not least because they're stuck on XP or Vista and also are likely to have a bunch of crapware installed that slows down their systems.


Palladium posted:

Add in perpetually declining PC sales

If by perpetual you mean "once in 2001 and once in 2012." Because that's what happened: yearly PC sales have declined year-over-year exactly twice, in 2001 compared to 2000 and in 2012 compared to 2011.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Palladium posted:

Intel missed the boat to the point that the ARM ecosystem has grown so dominant that even a better performance/battery-life x86 SoC won't guarantee a win.
One critical thing to remember: in a world where all code is executed with a JIT compiler on the device, it doesn't really matter what instruction set your processor uses, as long as the compiler supports it and the product does what you need. While there is room to improve Google's JIT compiler for x86, Intel has demonstrated it can produce credible options, improvement will continue, and it's not grossly unacceptable now. This also means the only major barrier to adoption of AMD SoCs is Android graphics drivers.

Install Gentoo posted:

This isn't true. Generally people stuck on those kinds of computers actually do have trouble doing things on the modern internet, not least because they're stuck on XP or Vista and also are likely to have a bunch of crapware installed that slows down their systems.
An Intel dual-core, a DX10 video card with currently supported drivers, and Windows Vista with Chrome/Firefox are all that's required for a great web experience. Granted, if someone never updates their drivers, puts XP on the machine, or picks up malware, the machine will suck, but that's not because it's old; it's because it's not maintained.

Alereon fucked around with this message at 17:56 on Feb 21, 2013

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Alereon posted:

One critical thing to remember: in a world where all code is executed with a JIT compiler on the device, it doesn't really matter what instruction set your processor uses, as long as the compiler supports it and the product does what you need.
The (rather glaring, and important) exception here is mostly iOS, honestly, since Android has otherwise won the mobile space. And a great deal of cloud services run on stacks other than the JVM. Ruby's various VMs are only so mature (how long did it take the GC to get to where Java's was in 2001, again?), and .NET rounds out most of the remaining backend stacks if you don't count the obscure ones built around node.js and Erlang. So runtime application VMs are only so useful across platforms, and the JVMs for the various ARM variants are nowhere near as optimized and scrutinized as on x86, partly due to fragmentation and the breakneck pace of ARM ISA development over the past several years.

Apple may in fact be the biggest barrier to Intel. Apple has shown historically that it isn't afraid to switch architectures, but the current Apple is not necessarily the Apple we've known for decades. They may just stay on x86 for laptops because of the sheer amount of software that now targets x86 and OS X; that's a position unlike any Apple has held before (m68k and PPC were niche or part of a fragmented ecosystem, let's face it).

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Apple is looking long and hard at moving to ARM on their laptops for the power/weight benefits. The iOS-ification of OS X and the channeling of software through their App Store will let them force third-party devs to rewrite (where needed) and compile for both ARM and x86 in a laptop form factor, without the messy transition period users went through with Rosetta and the move to Intel. Intel knows this, and they're scared shitless at the prospect. I have a feeling that if the transition happens, the iMac and Mac Pro lines will stay on x86 a lot longer than the mobile stuff.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

BangersInMyKnickers posted:

Apple is looking long and hard at moving to ARM on their laptops for the power/weight benefits. The iOS-ification of OS X and the channeling of software through their App Store will let them force third-party devs to rewrite (where needed) and compile for both ARM and x86 in a laptop form factor, without the messy transition period users went through with Rosetta and the move to Intel. Intel knows this, and they're scared shitless at the prospect. I have a feeling that if the transition happens, the iMac and Mac Pro lines will stay on x86 a lot longer than the mobile stuff.

I don't buy this at all. There's really no benefit to Apple moving OS X to ARM. Apple would rather you buy an iPad if you want something ultra-lightweight.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Laptops don't bring in money for Apple like the iPhone/iPad does, in part because Intel processors, and x86 in general, cost a lot more than ARM would. And Apple is already doing in-house ARM design, which will drive their part costs down even more. Switching their laptops to ARM would give them bigger margins on those products, in addition to greatly improved battery life. They're in the business of selling products that make money; an ARM transition would give them that, so of course they're considering it if Intel can't follow through fast enough.

Nintendo Kid
Aug 4, 2011

by Smythe
Or they could just ditch making laptops altogether and avoid the hassle of attempting a third architecture switch on their OS. Frankly, that seems a lot more likely than making new ARM OS X laptops that perform well enough for people to still want, while coordinating everything with x86-64 iMacs and Mac Minis.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I think we'll see what sort of long-term strategy Apple commits to depending on what their supposed update to the Mac Pro looks like. It seems fairly obvious to me that Apple isn't going to enter a thin-margin business of any sort, and that includes a number of the cloud services everyone tends to buy. iCloud is an obvious exception, but the goal of that service is to keep people invested in buying more iDevices.

On the other hand, going back and refining existing products doesn't produce the kinds of margins that reinventing a market entirely does, like the iPhone or iPad did, so maybe they'll just keep the "make it thinner and prettier than the competition" strategy that's their normal procedure. Apple may be in a bit of a lull now, but if they manage to solve the living-room content delivery and unification problem, one of the toughest out there for political rather than technical reasons, they may recover all of their recently shed share price and then some, and cement themselves as the most valuable company in America. That'll be kind of a sad day for me, but business is business; you can't attach sentiment to it.

Apple's probably working on ARM laptops for the same reasons they kept an x86 OS X build around for years - business contingencies.

Peechka
Nov 10, 2005

BangersInMyKnickers posted:

It's for idiots who read CADalyst magazine and demand the absolute highest synthetic-benchmark computer possible for working on shitty little 2D CAD architecture drawings.

It's funny you say this, because I do a lot of CAD at home, especially when the deadline is nearing and I only have 30% of the work done (I would rather do it at home than spend more time at work). My i5 2500K with 4 GB of RAM and an ATI 5850 blows through the stuff just as fast as my machine at work that costs 4x as much. Well, maybe I'm overstating this a bit, but still, my productivity does not suffer at all.

And I'm throwing Catia V5 and Siemens NX 7.5 at it, and I'm definitely not doing shitty 2D drawings, but full-blown 3D surface work on automotive instrument panels, where I usually have to load assemblies of 100+ parts on screen at the same time.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Peechka posted:

It's funny you say this, because I do a lot of CAD at home, especially when the deadline is nearing and I only have 30% of the work done (I would rather do it at home than spend more time at work). My i5 2500K with 4 GB of RAM and an ATI 5850 blows through the stuff just as fast as my machine at work that costs 4x as much. Well, maybe I'm overstating this a bit, but still, my productivity does not suffer at all.

And I'm throwing Catia V5 and Siemens NX 7.5 at it, and I'm definitely not doing shitty 2D drawings, but full-blown 3D surface work on automotive instrument panels, where I usually have to load assemblies of 100+ parts on screen at the same time.

No, that's the exact fight I had with our engineers a few years back, and the results were the same as what you're seeing. AutoCAD and Revit are extremely CPU-bottlenecked and single-threaded. There are some switches you can throw to spread normal operations over multiple threads, but the feature is "experimental" and often leads to layers being displayed out of order because of thread-synchronization issues, so you can't really use it. A cheaper dual-core processor in a Precision T1500 was a whole lot faster at all your normal work than a quad-core, because the quads were invariably clocked lower, making per-core performance worse. Even the GPU load when you're working in large 3D models is jack squat; watch it through GPU-Z and you're lucky to see 10% utilization with maybe 25% VRAM usage. And that's with the absolute cheapest FirePro cards we could throw in the things to at least get "certified" drivers and support.
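That one-core-pegged pattern is easy to confirm yourself: watch per-core load while the application grinds. A minimal sketch using the third-party psutil package (the thresholds are arbitrary illustration, not a rule):

code:
import psutil  # third-party: pip install psutil

# A single-threaded bottleneck shows up as one core pinned near 100%
# while the rest sit mostly idle.
for _ in range(5):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    pegged = max(per_core) > 90 and sorted(per_core)[-2] < 30
    print(" ".join(f"{p:5.1f}" for p in per_core),
          "<- looks single-thread bound" if pegged else "")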

Autodesk is just such a horrible shithole of a company, and I swear they are in collusion with hardware vendors to sell their customers computers that cost 3-4x what they actually need to spend to do the work. And god help you if you actually hit a real bug you need support on, because it's not getting fixed. We dealt with an issue for over a year on the 64-bit builds of their products where, on certain systems, rolling your cursor over an mpolygon would crash the whole program. Only the 64-bit build; 32-bit wouldn't do it. And a mass deletion of mpolygon objects on 64-bit took 20 minutes while pegging an entire CPU core, but on 32-bit it was an instantaneous operation.

I'm glad I don't support that shit any more.

Peechka
Nov 10, 2005
Well, I know for a fact that real 3D CAD modeling software like Siemens NX and Dassault Catia V5 will both use as many cores as you throw at them. They make good use of the RAM as well.

Here is my current work system...

Operating System: Windows XP Professional x64 Edition (5.2, Build 3790) Service Pack 2 (3790.srv03_sp2_gdr.130106-1434)
Language: English (Regional Setting: English)
System Manufacturer: Dell Inc.
System Model: Precision WorkStation T3500
BIOS: Default System BIOS
Processor: Intel(R) Pentium(R) III Xeon processor (8 CPUs), ~3.1GHz
Memory: 12286MB RAM
Page File: 4417MB used, 10184MB available
Windows Dir: C:\WINDOWS
DirectX Version: DirectX 9.0c (4.09.0000.0904)
Card name: NVIDIA Quadro 4000
Manufacturer: NVIDIA
Chip type: Quadro 4000
DAC type: Integrated RAMDAC
Device Key: Enum\PCI\VEN_10DE&DEV_06DD&SUBSYS_078010DE&REV_A3
Display Memory: 2048.0 MB
Current Mode: 1920 x 1200 (32 bit) (60Hz)
Monitor: Plug and Play Monitor
Monitor Max Res: 1600,1200

But really it's all about stability. For example, that video card, which uses OpenGL, is all about stability, so you don't crash every 30 minutes and lose work. They have drivers specifically tailored to applications like this so it's as stable as possible; at least that's the way a CAD sysadmin explained it to me.

So yeah, you don't need this type of machine to do CAD work, but you do need it if you don't want to crash and burn and lose work. For instance, my machine at home locks up if I try to move the icon windows around in NX. Shit like that, unpredictable instability. I'm not too familiar with AutoCAD or its variants, but I'm sure the stuff I work with is miles ahead of that software in 3D surface and solid design.

Oh, we also lease all of our CAD stations, and we're due for some new ones later this year. Next up: all solid-state, baby, and maybe two monitors, though I told them I'd prefer one bigger one, like the new 2560x1440 or whatever 27".

Peechka fucked around with this message at 23:28 on Feb 21, 2013

Puddin
Apr 9, 2004
Leave it to Brak
I just bought some gear and was about to put all the stuff in my case.

Coming from an E8200 dual core to an i5-3570, I noticed that the secondary power plug is an 8-pin where the power supply has a 4-pin. I totally forgot to research power supplies; I have a Thermaltake 600W PSU.

I've seen on various other sites that some motherboards come with a sticker over it saying you only need to use half, and that the full 8-pin is only needed if you're overclocking. Is this still the case, or am I looking at a new PSU as well? If so, I'll finally go modular to reduce all the clutter.

The motherboard is an MSI Z77A-G43.

Henrik Zetterberg
Dec 7, 2007

I had the same problem with my Intel Z77 board. It wouldn't POST with the 4-pin and I had to get a new power supply.

SteviaFan420
Apr 20, 2009
So is Ivy Bridge-E going to be released soon?

Zhentar
Sep 28, 2003

Brilliant Master Genius

Puddin posted:

I have a Thermaltake 600W PSU

I'm pretty sure everything Thermaltake sells is complete shit, so I'd recommend replacing it regardless (with a SeaSonic X650, because everything SeaSonic sells is awesome).

Puddin
Apr 9, 2004
Leave it to Brak

Henrik Zetterberg posted:

I had the same problem with my Intel Z77 board. It wouldn't POST with the 4-pin and I had to get a new power supply.

Yeah, I tried it and it powers up but doesn't get to POST. New PSU it is. Which is okay, as I'll probably use the old parts for an XBMC machine in the living room.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Puddin posted:

Yeah, I tried it and it powers up but doesn't get to POST. New PSU it is. Which is okay, as I'll probably use the old parts for an XBMC machine in the living room.

I'm pretty sure you can just throw one of these adapters at it, so long as the power supply puts enough current on +12V, which it probably does since power requirements keep going down. The power supply I'm using now came with a P4 connector and a P4-to-P8 adapter just like that, which I ended up using.

http://www.xpcgear.com/cvt48.html
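A back-of-the-envelope check on whether the 4-pin feed is enough: count the 12 V pins and multiply by a per-pin current rating. The ~6 A figure below is a conservative assumption, not a datasheet number:

code:
def connector_watts(pins_12v, amps_per_pin=6.0, volts=12.0):
    """Rough power budget for a CPU power connector."""
    return pins_12v * amps_per_pin * volts

print(connector_watts(2))  # 4-pin ATX12V: two 12 V pins  -> ~144 W
print(connector_watts(4))  # 8-pin EPS12V: four 12 V pins -> ~288 W
# Either comfortably covers a 77 W TDP desktop chip like the i5-3570.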

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva

Zhentar posted:

I'm pretty sure everything Thermaltake sells is complete shit, so I'd recommend replacing it regardless (with a SeaSonic X650, because everything SeaSonic sells is awesome).
The Thermaltake Toughpower line is actually pretty good; however, if it's a 600W model it's probably ancient at this point, so it may need replacing anyway.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

BangersInMyKnickers posted:

Apple is looking long and hard at moving to ARM on their laptops for the power/weight benefits. The iOS-ification of OS X and the channeling of software through their App Store will let them force third-party devs to rewrite (where needed) and compile for both ARM and x86 in a laptop form factor, without the messy transition period users went through with Rosetta and the move to Intel. Intel knows this, and they're scared shitless at the prospect. I have a feeling that if the transition happens, the iMac and Mac Pro lines will stay on x86 a lot longer than the mobile stuff.
I'd be honestly really surprised if this happened. Apple's been making tremendous inroads with many corporations -- I'd say it's probably been their biggest Mac growth segment over the last five years -- and it's mostly because Parallels and VMware Fusion allow people to run important business applications that there are no supportable Mac alternatives to. This is not solely in the hands of IT departments, but also the BYOD generation.

There were major business wins for Apple in the Intel switch besides not being tied to a dying architecture with major problems in the areas of power, heat, and performance. Especially with what's going on with Haswell in terms of power consumption, I don't see a single compelling reason for them to switch at this point in time.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Apple having an enterprise desktop strategy makes only so much sense, given they abhor dealing with the usual BS of enterprise software, with its armies of consultants and architects. They appreciate the business, but they're just not going to suck enterprise dick like IBM, HP, Unisys, BMC, CA, ad infinitum. They're not exactly rolling out an iOS Management Solution Suite for their already-successful entry into the enterprise supply chain, either. I really don't think that sort of work would attract the kind of employees Apple wants, anyway.

And BYOD at regulated companies is just not going to go the route it did at unregulated ones like Intel over the next 15 years. I'll suck a mean dick and swallow with a smile if hardasses like the financial and healthcare verticals ever let you buy a random laptop and put it on the network, no problemo, with their IT's blessing.


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

necrobobsledder posted:

Apple having an enterprise desktop strategy makes only so much sense, given they abhor dealing with the usual BS of enterprise software, with its armies of consultants and architects. They appreciate the business, but they're just not going to suck enterprise dick like IBM, HP, Unisys, BMC, CA, ad infinitum. They're not exactly rolling out an iOS Management Solution Suite for their already-successful entry into the enterprise supply chain, either. I really don't think that sort of work would attract the kind of employees Apple wants, anyway.

And BYOD at regulated companies is just not going to go the route it did at unregulated ones like Intel over the next 15 years. I'll suck a mean dick and swallow with a smile if hardasses like the financial and healthcare verticals ever let you buy a random laptop and put it on the network, no problemo, with their IT's blessing.
I think this is the same thing that everyone was saying back around 2007 when the consensus was that they would never support Microsoft Exchange on iPhone.
