|
movax posted:Last de-rail: yeah, the storage market is currently where the "holy poo poo, night and day" difference is at, thanks to SSD. Don't have to worry about mechanical failure, you get bitchin' fast speeds, etc. Systems shipping with 2GB of RAM aren't going to be slowed in any real way by 6% of system RAM being allocated to desktop composition. Once you're up to 4GB it doesn't matter in the slightest. You can have all the dedicated GPU RAM in the world, but this thing isn't going to be powerful enough to drive anything with it, so you might as well just use the system memory that's already there. If you want anything more intensive than Aero or Flash HW acceleration, get real dedicated graphics.
|
# ¿ Sep 15, 2010 17:02 |
|
movax posted:Or maybe that was just in the silly Sandra synthetic memory benchmarks, which no one really cared about except in overclocking e-peen contests. That. We're talking about systems that just need to do the job "well enough". And since the memory controller has now moved into the CPU and DDR3 is here, I seriously doubt memory bandwidth is going to be saturated in most configurations.
|
# ¿ Sep 15, 2010 17:11 |
|
Misogynist posted:It seems obvious to me that this is targeted at netbooks and similar form-factors, and I don't know why people are putting so much effort into dissecting how horrible it is at replacing their GPGPUs and video encoders. What do you mean it won't do something for a market segment they were never targeting?? I demand satisfaction!!
|
# ¿ Sep 15, 2010 17:24 |
|
Assuming Dell starts offering on-board video with dual-display outputs on the next gen of Optiplexes, this will probably get us to drop add-in video cards for everything except Autodesk product users.
|
# ¿ Sep 15, 2010 17:25 |
|
Doc Block posted:Yes, I realize that now. I was just referring to your own quote in the OP, where it says no programmability, and thought that meant no shader support. They're "gimping" their GPU to fulfill power/speed/size demands of a market segment that in no way gives a poo poo about their graphics hardware being fully programmable.
|
# ¿ Sep 15, 2010 17:37 |
|
Doc Block posted:Unfortunately there are a lot of businesses with no plans whatsoever to upgrade beyond XP in the immediate future. I'm guessing at some point a proper TCO study of XP vs. Win7 will come out that factors in all the man-hours sunk into having tier 1 chase down the stupid viruses that would have easily been stopped by just having Win7 in place; it'll get hyped through a bunch of management tech rags, and then CTOs will start screaming about migrating. The only business requirement actually forcing people into upgrading at this point comes from graphic design people and engineers/3D modeling. PC LOAD LETTER posted:This is certainly true right now, but in a few years or so maybe (maaaybe) not. It'd be nice to have a GPU in most every PC that would support GPGPU apps, it'd certainly help foster the development of GPGPU apps. Possibly, but there is a huge sector of their business customers that simply isn't going to see any use out of this, and if dropping it means cheaper chips, they'll go that route because their workload is serial in nature and 2-4 cores is going to be enough to handle small spikes in load and background tasks while staying responsive. Where are you going to shoehorn GPGPU support into the Office suite besides Excel jockeys? Especially considering Microsoft is already trying to abstract the heavy lifting there to a server cluster behind the workstation.
|
# ¿ Sep 16, 2010 16:28 |
|
bull3964 posted:Not necessarily. The same pressure that kept older versions of windows around (browser support) will also drive the older ones to go away too for general business users. We are so far away from the point where IE8 won't be supported (and we'd therefore need to get off XP to update it) that it isn't even worth considering. Other factors will drive it before that.
|
# ¿ Sep 16, 2010 17:54 |
|
Nebulis01 posted:They do this now with a PCI-E add-on card that adds a DVI port and can be used in addition to the on board VGA connector. We've gotten all of our Optiplex 780s configured this way. I understand that. The onboard just doesn't have enough oomph to do Aero composition well, which is why we've been going with the 256MB add-in card to handle it.
|
# ¿ Sep 16, 2010 18:22 |
|
bull3964 posted:It really depends on how quickly GPU acceleration of the browser becomes a requirement. The Amazon shelf demo is horribly clunky without GPU acceleration (around 6fps vs 60fps with acceleration.) If these types of rich web apps become common quickly, the push will be there. Why the hell is a business going to care if Amazon.com is running slowly?
|
# ¿ Sep 16, 2010 20:18 |
|
Nam Taf posted:The day this starts supporting GPGPU is the day I'll blow my load. I thought GPUs could access system ram as-needed since the AGP days?
|
# ¿ Sep 19, 2010 15:23 |
|
Cryolite posted:How much of a decrease in boot times do you think we'll see from UEFI vs. BIOS? UEFI, I was told anyway, also allows for the operating system to do a warm reboot without UEFI re-executing. So there is that.
|
# ¿ Nov 4, 2010 14:07 |
|
~Coxy posted:I hope this doesn't come across as a loaded question, because I'm actually curious. It's a much faster modular system running out of flash memory. Some of our servers use it now and initialization time is a fraction of what it was previously. I believe Macs are already using EFI, which is a good portion of why they can get to OS load so quickly.
|
# ¿ Nov 5, 2010 14:05 |
|
movax posted:By legacy, I mean more offering up BIOS services to the OS. If you remember DOS and its ilk, they used direct BIOS interrupts for nearly everything. Like EnergizerFellow mentioned, even 32-bit Win7 still needs to use int 19h to actually *boot*. The newest Dell PE servers have a BIOS/firmware update utility built right in to the motherboard and I cannot begin to describe how completely awesome it is. Set the IP, pulls down updates from Dell's FTP repository, and goes. No muss no fuss.
|
# ¿ Nov 11, 2010 15:28 |
|
movax posted:There's nothing to worry about, EFI-based BIOS have been in Macs, and vendors like AMI have very mature EFI implementations; there were full EFI-based BIOSes available for Calpella & its accompanying generation, if any vendors felt so inclined. The E6600 has had a good service life though. Mine has been dutifully chugging away for 4 years now, so it's really not surprising that my next upgrade is going to involve a total platform overhaul. In hindsight it really amazes me how much hardware requirements for applications stagnated over that period. Between the purging of old hardware and greater adoption of .NET and the overhead it carries, requirements seem to be going up in earnest for the first time in years.
|
# ¿ Nov 14, 2010 22:47 |
|
spasticColon posted:Is a special motherboard going to be required to utilize the K edition processors? A mid-range board for around $100 would be nice and I don't need SLI, Crossfire, RAID, or USB 3.0 just the ability to utilize the unlocked multiplier. And two accessible PCI slots for my wireless card and sound card which probably means a full ATX board since my video card has a two-slot cooler. One more question, will that Cooler Master Hyper 212P work with the new socket 1155? Likely not, but you'll want one that lets you have granular control of voltages and the FSB along with ratio splits and all that other fun stuff. Just wait until they're out for a bit and see what people recommend because in addition to those features some boards will be better suited for going above spec ratings.
|
# ¿ Dec 10, 2010 23:38 |
|
Hyperthreading isn't going to hinder performance; it just jams additional instructions into holes in the execution pipeline when it finds them. One of two things will happen. Either your game is single-threaded (or dual- or triple-threaded) and you have a core or more free to run background processes, and the Windows scheduler runs them there because it sees cycles available, so hyperthreading doesn't really enter into it. Or you're doing something that consumes all your CPU cycles, in which case your background processes will be in contention with your game/application and splitting resources according to process priority, while hyperthreading jams instructions into those pipeline holes and gets more work done per clock than would have happened otherwise.
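The "filling holes in the pipeline" idea can be sketched with a toy issue-slot model. All the numbers here (4-wide issue, 60% chance a thread has an instruction ready each cycle) are made up for illustration, not real Nehalem pipeline figures:

```python
import random

def simulate_ipc(threads, issue_width=4, ready_prob=0.6, cycles=100_000, seed=1):
    """Toy core model: each cycle the core can issue up to issue_width
    instructions. Each hardware thread independently has instructions
    ready with probability ready_prob per slot; with SMT, the second
    thread's ready instructions fill slots the first thread left empty."""
    rng = random.Random(seed)
    issued = 0
    for _ in range(cycles):
        slots = issue_width
        for _ in range(threads):
            # how many instructions this thread has ready this cycle
            ready = sum(rng.random() < ready_prob for _ in range(issue_width))
            used = min(slots, ready)
            issued += used
            slots -= used
            if slots == 0:
                break
    return issued / cycles

one = simulate_ipc(threads=1)
two = simulate_ipc(threads=2)
print(f"1 thread IPC ~{one:.2f}, 2 SMT threads IPC ~{two:.2f}")
```

In this toy model the second thread only ever adds work the first thread couldn't issue, which is the point of the post: SMT can't make a saturated pipeline slower, it just raises work done per clock when bubbles exist.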
|
# ¿ Dec 14, 2010 15:20 |
|
Lum posted:Out of interest, are there any games or killer apps due out that actually need this? There's a solid 40-50% performance jump between similarly clocked first-gen C2 and i7 processors. I'll agree that system requirements are hitting a plateau (and I think this has more to do with aging console hardware than anything), but until you start seeing more software that needs the hardware I don't see a terribly compelling reason to upgrade.
|
# ¿ Dec 14, 2010 17:39 |
|
Lum posted:My main interest in EFI is to improve boot time. Sleep mode makes that pretty low on my consideration list these days. As for legacy stuff bogging it down, this will mostly depend on how it is implemented. If it is like what Dell did on their servers, the core will be EFI with BIOS emulation layers for legacy components but those can be turned off if not needed. Who knows how other manufacturers will handle it, though I suspect a lot will do a bad job initially.
|
# ¿ Dec 14, 2010 17:48 |
|
Lum posted:Am I right in thinking that you need EFI drivers for add-in cards to avoid the BIOS emulation layer, and therefore I'll need to hold off on a graphics card update until they start supporting EFI too? No, it just uses some standard VGA 640x480 driver. Things like add-in disk controllers or network cards with PXE support are going to be the problem and forced through a legacy layer.
|
# ¿ Dec 14, 2010 20:04 |
|
Odds are that yes, even a terrible EFI implementation will not be any worse than a standard BIOS one (unless they did a horrific job and there are stability/conflict issues). One of the things I really enjoy about EFI is that there is so much more screen area and storage to work with, not to mention mouse support, so you can get detailed feature descriptions on settings without having to dig out the manual. Plus there's the potential for an embedded firmware flash utility that can use FTP to pull down updated bin files, since there is a TCP/IP stack.
|
# ¿ Dec 14, 2010 20:24 |
|
Idle quad-core i5 systems draw less power than the older Core2 systems they're replacing in our office. They might draw more under heavy load, but it's really going to depend on how you're using them and with what software.
|
# ¿ Dec 14, 2010 23:35 |
|
movax posted:It's definitely a hard launch, but rest assured, Newegg and co. will gouge for a few weeks before prices settle. I'd be a little more worried about mobo availability, once the first few AnandTech/similar reviews come out recommending a certain board(s) at a certain pricepoint, they sell out reaaaaly fast. All the benchmarks I have seen indicate that dual vs. triple channel isn't going to make a difference because your CPU can't use all that memory bandwidth. Once we start seeing 6- and 8-core i7s it will probably matter, but not at the moment. Maybe you'll start hitting the limits of a dual-channel config if you really overclock the hell out of it, but then you can just drop in that third DIMM and get even more bandwidth. I seriously doubt screwing around with the memory bus will provide any performance benefit until we start getting into 6- and 8-core home systems.
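The back-of-envelope math on peak bandwidth bears this out. A quick sketch, assuming the typical configs of the era (dual-channel DDR3-1333 on LGA1155 vs. triple-channel DDR3-1066 on LGA1366); these are theoretical ceilings and real-world throughput is lower:

```python
def ddr3_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    """Peak theoretical DDR3 bandwidth: transfers per second times the
    8-byte (64-bit) channel width times the number of channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

dual = ddr3_bandwidth_gbs(1333, channels=2)    # common LGA1155 setup
triple = ddr3_bandwidth_gbs(1066, channels=3)  # common LGA1366 setup
print(f"dual-channel DDR3-1333:   {dual:.1f} GB/s")
print(f"triple-channel DDR3-1066: {triple:.1f} GB/s")
```

Both figures sit well above what a quad-core of the time could actually consume in most workloads, which is why the benchmarks showed no practical difference between the two.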
|
# ¿ Dec 16, 2010 17:08 |
|
What's the story with that low-overhead AMD AA tech (the one that messes with text fidelity)? Are there specific cards it works with or is it just a driver-based feature? I usually end up with nvidia cards but the 6850 is looking like a good deal and that feature is enticing.
|
# ¿ Mar 27, 2011 18:29 |
|
Alereon posted:Morphological AA (MLAA) works on the Radeon HD 5000-series and newer cards with current drivers. It's basically a blur filter that specifically works on sharp edges. The Radeon HD 6900-series also added Enhanced Quality AA, which takes additional coverage samples to improve color blending quality. Also, I'm not sure, but I think they may have fixed the AA messing with text by putting in application profiles that disables AA for apps like web browsers. Thanks. And I totally threw this in the wrong thread tab, so thanks for ignoring that as well.
|
# ¿ Mar 28, 2011 02:44 |
|
Do you really need an extra 600MHz? The more voltage you crank through this thing, the shorter its lifespan will be, and at some point you have to ask yourself if it's worth mucking around with any more.
|
# ¿ Mar 29, 2011 17:45 |
|
MaxxBot posted:I understand your concern but 1.3V is by no means a high voltage for a 2500K/2600K CPU to run at. Mine is at 1.38V and it barely breaks 60C at full load. I understand there is more to this than temperature but people have even asked motherboard manufacturers and been told that voltages up to 1.42V are safe. People have stress tested these things for days with voltages over 1.5V with no issues. I'm not talking about failures in days or months, I'm talking about the traces frying out after 2 years instead of 10. That's the risk you take by over-volting it, and there simply isn't enough time to do load testing that recreates the potential loss in service life. If that isn't really a concern for you, then by all means go crazy. I'm trying to stress the point that just because it is stable doesn't mean you aren't damaging things.
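The lifespan-vs-voltage tradeoff is commonly modeled with Black's equation for electromigration, MTTF ∝ J^(-n)·exp(Ea/kT). A rough sketch below treats current density as roughly proportional to voltage; the exponent and activation energy are illustrative textbook-range values, not Intel's 32nm process numbers (which aren't public):

```python
import math

def relative_mttf(v, t_celsius, v_ref=1.2, t_ref_celsius=60.0,
                  n=2.0, ea_ev=0.9):
    """Lifetime relative to a reference operating point, per Black's
    equation MTTF ~ J^-n * exp(Ea/kT). Assumes J scales with V."""
    k = 8.617e-5  # Boltzmann constant in eV/K
    t = t_celsius + 273.15
    t_ref = t_ref_celsius + 273.15
    current_term = (v_ref / v) ** n
    thermal_term = math.exp(ea_ev / k * (1.0 / t - 1.0 / t_ref))
    return current_term * thermal_term

# stock-ish vs. an overvolted, hotter chip
print(f"1.38V @ 60C: {relative_mttf(1.38, 60):.2f}x reference lifetime")
print(f"1.50V @ 75C: {relative_mttf(1.50, 75):.2f}x reference lifetime")
```

The exact multipliers depend entirely on the process constants, but the shape of the curve is the point being argued: a chip can be completely stable today while its expected service life has quietly dropped by a large factor.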
|
# ¿ Apr 2, 2011 00:06 |
|
What kind of temperatures should I be seeing on a 2500K at stock speeds with the Intel cooler? It idles around 30C, which seems fine, but if I throw it under Prime95 for some load testing it shoots up to 60C+ and the fan RPM never budges from 2000. On my old E6600 the fan speed would start ramping up around 50C, which seems right; 60C seems high to me.
|
# ¿ Apr 7, 2011 18:03 |
|
Is there a known issue with P67 boards seeing the display blank out, occasionally with repeating vertical bars? I'm 90% sure it's just a bad video card, but I'd like to know I didn't miss anything before I RMA.
|
# ¿ Apr 8, 2011 16:15 |
|
movax posted:Well, uh, the P67's shouldn't have integrated graphics, so I assume you're talking about your videocard? Cable seated properly? What's your BCLK? PCIe spread spectrum on or off? Spread spectrum is off and I've reseated everything at this point. No idea about the BCLK, but I would assume it's still at factory settings. Like I said, I'm 90% sure it's just the video card being bad, but the way the keyboard locks up when it happens makes me a little worried about the PCIe bus/chipset as well. I don't have enough spare hardware to do a proper bench test, which is making this more difficult than I expected.
|
# ¿ Apr 8, 2011 19:54 |
|
movax posted:Oops yeah, I read your original post wrong, of course it's the videocard. Did you buy this videocard along with the mobo (i.e. did it work properly in another board before this one?) All new everything (except the power supply, which has been a very reliable 500W OCZ thing). But I think I found a guy to lend me an 8800GT for bench testing, so that should settle things.
|
# ¿ Apr 8, 2011 21:04 |
|
Is anyone else seeing graphical corruption with old Java apps when you use the embedded graphics? I'm sure it is an Intel driver issue, but apparently I am the first one to report it to Dell and subsequently Intel and it is an absolute pain in the rear end to be the first to get them to acknowledge this kind of thing.
|
# ¿ Aug 2, 2011 22:34 |
|
Factory Factory posted:Got an example app we can try? In my case it is Oracle JInitiator 1.3.1.30, so unless you have an Oracle middleware server running like I do it won't do you much good. But if anyone has some crappy software that runs on a 1.3 JRE they could test, I would appreciate it. The behavior is odd: basically the windows won't refresh unless you move them, but if you open up the Intel graphics properties dialog and leave it in the background, everything starts refreshing properly.
|
# ¿ Aug 2, 2011 22:44 |
|
Combat Pretzel posted:I wish SSD caching would be a function of the OS. Put it in the new Storage Spaces stack, or whatever. In ZFS it was called L2ARC and worked drat fine. I think the all-in-one approach Seagate is taking with their Momentus XT drives is the best long-term approach, but at the moment it is primarily doing read caching because they only have 16GB of SSD to work with (and a chunk of that is reserved for OS boot acceleration). If they start getting 32-64+ GB of SSD on there, then instead of just read acceleration it could become a two-stage storage mechanism where write data is also staged there as an NV cache, similar to what high-end RAID controllers and NASes are doing.
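The two-stage write idea sketched as code. This is purely illustrative (real hybrid drives do this in firmware, and a real implementation has to survive power loss, hence the NV cache): writes are acknowledged from the fast tier and flushed to the slow tier in batches.

```python
class WriteStageCache:
    """Minimal sketch of two-stage storage: writes land in a fast tier
    (the SSD portion) and are flushed to the slow tier (the spinning
    disk) in batches, the way a battery-backed RAID write cache works."""
    def __init__(self, backing, flush_threshold=4):
        self.backing = backing          # dict standing in for the HDD
        self.staged = {}                # dirty blocks held on "flash"
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.staged[block] = data       # ack immediately from fast tier
        if len(self.staged) >= self.flush_threshold:
            self.flush()

    def read(self, block):
        # dirty data must be served from the stage, not the backing store
        return self.staged.get(block, self.backing.get(block))

    def flush(self):
        self.backing.update(self.staged)
        self.staged.clear()

hdd = {}
cache = WriteStageCache(hdd)
cache.write(0, b"boot")
assert cache.read(0) == b"boot"   # served from the flash stage
assert 0 not in hdd               # not yet flushed to disk
cache.flush()
assert hdd[0] == b"boot"
```

The interesting design question is exactly the one raised in the post: with only 16GB of flash there isn't room to stage writes safely alongside the read cache, but at 32-64GB the staging tier becomes practical.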
|
# ¿ Jan 24, 2013 18:16 |
|
I've had nothing but trouble when trying to configure NIC teaming and some other features on servers with Broadcom NICs, while the Intel ones have been easy and reliable. In a home environment, stuff like that and TOE aren't going to matter a whole lot, but if a couple extra bucks means I'm not paying Broadcom for their lovely chips then I'm all for it.
|
# ¿ Jan 25, 2013 19:57 |
|
THEY CALL HIM BOSS posted:Is there a comparison out yet that shows the differences between the Haswell consumer processors and the Haswell Xeons? I know it's far away but I wasn't sure if they released any information on that. Traditionally it is just adding support for multi-processor configurations, more cache, and larger memory maximums. I wouldn't expect a crazy departure from the consumer-grade stuff.
|
# ¿ Feb 9, 2013 17:00 |
|
Shaocaholica posted:Maybe the lanes have always been there but some of them disabled on the desktop version? It's for idiots that read CADalyst magazine and demand the absolute highest synthetic benchmark computer possible for working on lovely little 2d CAD Architecture drawings.
|
# ¿ Feb 16, 2013 00:55 |
|
Apple is looking long and hard at moving to ARM on their laptops for the power/weight benefits. The iOS-ification of OS X and channeling software through their app store is going to let them force 3rd-party devs to re-write (assuming it's needed) and compile for both ARM and x86 platforms in a laptop form factor without the messy transition period for the user like what happened with Rosetta and the move to Intel. Intel knows this, and they're scared shitless at the prospect. I have a feeling if the transition does happen, the iMac and Mac Pro lines will stay on x86 a lot longer than the mobile stuff.
|
# ¿ Feb 21, 2013 19:57 |
|
Laptops don't bring in money for Apple like the iPhone/iPad does. That's in part because Intel processors, and x86 in general, cost a lot more than ARM would. And Apple is already moving to in-house ARM design, which is going to drive down their part cost even more. Switching their laptops over to ARM would give them bigger margins on those products in addition to greatly improved battery life. They're in the business of selling products that make money; an ARM transition would give them that, so of course they're considering it if Intel can't follow through fast enough.
|
# ¿ Feb 21, 2013 20:30 |
|
Peechka posted:Its funny you say this because I do a lot of CAD at home, especially when the deadline is nearing and I only have 30% of the work done.(I would rather do it at home than spend more time at work) My i5 2500K and 4 gigs of ram with a ATI5850 blows through the stuff just as fast as my machine at work that costs 4x as much. Well, maybe Im overstating this a bit, but still, my productivity does not suffer at all. No, that is the exact fight I had to have with our engineers a few years back, and the results were the same as what you see. CAD and Revit are extremely CPU-bottlenecked and single-threaded. There are some switches you can throw to spread normal operations over multiple threads, but the software is complete garbage, the feature is "experimental", and it often leads to layers being displayed out of order because of thread synchronization issues, so you can't really use it. A cheaper dual-core processor in a Precision T1500 was a whole lot faster at all your normal work than a quad-core, because the quads were invariably clocked slower, making per-core performance worse. Even the GPU load when you're working in large 3D models is jack squat; if you watch load through GPU-Z you're lucky to see 10% utilization with maybe 25% VRAM usage. And that's with the absolute cheapest FirePro cards we could throw in the things to at least get "certified" drivers and support. Autodesk is just such a horrible shithole of a company, and I swear they are in collusion with hardware vendors to sell their customers computers that cost 3-4x what they actually need to spend to do the work. And god help you if you actually encounter a real bug that you need support on, because it's not getting fixed. We dealt with an issue for over a year where, on certain systems, the 64-bit builds of their products would crash the whole program if you rolled your cursor over an mpolygon. Only the 64-bit build; 32-bit wouldn't do it.
And if you did a mass deletion of mpolygon objects on 64-bit it would take 20 minutes while pegging an entire CPU core, but on 32-bit it was an instantaneous operation. I'm glad I don't support that poo poo any more.
|
# ¿ Feb 21, 2013 22:36 |
|
Puddin posted:Yeah, I tried it and it boots but doesnt get to POST. New PSU it is. Which is okay as I will probably use the old parts as an XBMC machine for the living room. I'm pretty sure you can just throw one of these adapters at it, so long as the power supply puts enough current on +12V, which it probably does since the power requirements are going down. The power supply I'm using now came with a P4 connector and a P4-to-P8 adapter just like that, which I ended up using. http://www.xpcgear.com/cvt48.html
|
# ¿ Feb 22, 2013 23:01 |