Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Update 2018: Patch your poo poo if you can! Meltdown and Spectre are here...

Update 08/31/2014: Haswell-E is here, pairing up to 8 Haswell CPU cores with the X99 chipset and quad-channel DDR4 memory. This platform is for people who really need 6+ CPU cores, >32GB of RAM, double the normal memory bandwidth, or 32+ PCIe lanes. Most people would be better served with a 4790K, especially for gaming.

Update 05/02/2014: The first motherboards with Intel Z97 chipsets are now available early. They continue to use the LGA-1150 socket and support existing Haswell CPUs as well as the upcoming Haswell Refresh/Devil's Canyon models and next year's 14nm Broadwell CPUs. Z97 boards typically sport SATA-Express and many include NGFF/M.2 slots for SSDs. We will know more once chipset reviews launch next week.

Update 06/01/2013: Haswell reviews are here, here is selected coverage from Anandtech:
Anandtech: The Haswell Review: Core i7-4770K and i5-4670K Tested
Anandtech: The Intel Iris Pro 5200 Graphics Review: Core i7 4950HQ Tested
Anandtech: Intel's Haswell Quad-core Desktop Lineup
Anandtech: Intel's Haswell Quad-core Mobile Lineup
Anandtech: Intel's Haswell: An HTPC Perspective

Update 10/05/2012: Anandtech has posted their Haswell architecture analysis.

Update 04/23/2012: Anandtech has posted their Ivy Bridge Core i7 3770K review.

Update 03/06/2012: Anandtech has posted their Ivy Bridge Core i7 3770K preview, also confirming the Ivy Bridge launch has been pushed back until April 29th. 7-series motherboards will still be launching April 8th as scheduled.

Update 01/31/2011: A defect has been found in the SATA controller of the Intel 6-series chipset that will require a recall and replacement of Sandy Bridge motherboards. From the Intel statement:

Intel posted:

SANTA CLARA, Calif.--(BUSINESS WIRE)-- As part of ongoing quality assurance, Intel Corporation has discovered a design issue in a recently released support chip, the Intel® 6 Series, code-named Cougar Point, and has implemented a silicon fix. In some cases, the Serial-ATA (SATA) ports within the chipsets may degrade over time, potentially impacting the performance or functionality of SATA-linked devices such as hard disk drives and DVD-drives. The chipset is utilized in PCs with Intel’s latest Second Generation Intel Core processors, code-named Sandy Bridge. Intel has stopped shipment of the affected support chip from its factories. Intel has corrected the design issue, and has begun manufacturing a new version of the support chip which will resolve the issue. The Sandy Bridge microprocessor is unaffected and no other products are affected by this issue.

The company expects to begin delivering the updated version of the chipset to customers in late February and expects full volume recovery in April. Intel stands behind its products and is committed to product quality. For computer makers and other Intel customers that have bought potentially affected chipsets or systems, Intel will work with its OEM partners to accept the return of the affected chipsets, and plans to support modifications or replacements needed on motherboards or systems. The systems with the affected support chips have only been shipping since January 9th and the company believes that relatively few consumers are impacted by this issue. The only systems sold to an end customer potentially impacted are Second Generation Core i5 and Core i7 quad core based systems. Intel believes that consumers can continue to use their systems with confidence, while working with their computer manufacturer for a permanent solution. For further information consumers should contact Intel at https://www.intel.com on the support page or contact their OEM manufacturer.
Here's an article from Anandtech with more details.

Update 01/05/2011: The NDAs expired early. Here are some reviews:
Anandtech: The Sandy Bridge Review: Intel Core i7-2600K, i5-2500K and Core i3-2100 Tested
Anandtech: Intel’s Sandy Bridge: Upheaval in the Mobile Landscape
HardOCP: Intel Sandy Bridge 2600K & 2500K Processors Review

Here are the original preview articles:
Anandtech: Intel's Sandy Bridge Architecture Exposed
Anandtech: Performance Preview of Intel's Sandy Bridge

On January 9th, Intel will launch their new Sandy Bridge processors, and with them usher in some of the most significant changes to the computing landscape we've seen in years.

The CPU

Oddly enough for a new Intel product, the CPU cores are perhaps the least interesting aspect of Sandy Bridge. The upshot is that Sandy Bridge is about 10% faster clock-for-clock than current Lynnfield Core i5/i7 CPUs using current code, and uses about 10% less power. Sandy Bridge combines the SSE4.2 instructions from the Nehalem Core i7s, the AES-NI instructions from the Gulftown Core i7 hex-cores that drastically improve encryption and decryption performance, along with new Advanced Vector Extensions (AVX) that improve floating point performance. Intel has also introduced a new "Level 0 cache" that caches decoded instructions, improving power-efficiency by saving the CPU from having to duplicate work. Intel has also connected all parts of the CPU together using a high-speed ring bus, which is interesting because it physically sits on top of the L3 cache, like a highway overpass, allowing high-performance connectivity without taking up any room. Turbo Boost has also been enhanced in Sandy Bridge. Previously, Turbo Boost allowed the CPU to overclock itself if some cores were idle, or if all cores were in use but the processor hadn't reached its Thermal Design Power (TDP) yet. Turbo Boost is now allowed to exceed the rated TDP for up to 25 seconds at a time, in order to more quickly finish tasks and return to an idle condition. The Anandtech Sandy Bridge architecture article goes into more detail about these and other changes.
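To make the new Turbo behavior concrete, here's a minimal sketch of a burst-budget model. The wattages and budget size are illustrative numbers, not Intel's specs; only the ~25-second burst window comes from the description above:

```python
# Toy model of Sandy Bridge Turbo Boost: the CPU may draw above its rated
# TDP while a thermal "energy budget" lasts, then falls back to TDP.
# Illustrative figures: 95W TDP, 115W burst, 500J budget -> 25s of burst.

def turbo_power_trace(seconds, tdp_w=95, burst_w=115, burst_budget_j=500):
    """Per-second power draw under a simple burst-budget model."""
    trace, budget = [], burst_budget_j
    for _ in range(seconds):
        if budget >= burst_w - tdp_w:   # enough budget to burst for 1s
            trace.append(burst_w)
            budget -= burst_w - tdp_w   # bursting drains 20J each second
        else:
            trace.append(tdp_w)         # budget spent: back to rated TDP
    return trace

trace = turbo_power_trace(30)
# bursts for 500J / 20W-over-TDP = 25 seconds, then settles at the 95W TDP
```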

Graphics
Update 01/05/2011: It turns out that only the K-series overclocking edition desktop processors will feature the Intel HD Graphics 3000 GPU with 12 Execution Units. All other desktop processors will have Intel HD Graphics 2000 with 6 EUs. All laptop processors are Intel HD Graphics 3000.

For the first time ever, we have onboard graphics that are faster than entry-level dedicated graphics cards. Sandy Bridge has a DirectX10 graphics core integrated into the processor that handily beats either a Radeon HD 5450 or GeForce G210, spelling the end of the line for low-end GPUs that aren't capable of decent gaming performance. Unlike the Clarkdale processors, which had a separate graphics chip sitting next to the CPU on the same package, on Sandy Bridge the graphics core is integrated into the processor die, much like another CPU core. The graphics core also supports Turbo Boost, allowing it to be overclocked to improve gaming performance if the processor cores aren't fully using their power budgets. The graphics core runs at 850MHz, and can turbo up to 1100-1350MHz depending on the model. There's some uncertainty about graphics performance for desktop Sandy Bridge chips: depending on the model, they come with a graphics core that has either 6 or 12 Execution Units, and it hasn't been confirmed how many units the chip Anandtech tested had. Laptop chips all have 12 EUs.

Media Engine
Update 01/05/2011: The media engine has been branded as Intel Quick Sync technology. Unfortunately, using this technology requires that the processor's on-die graphics be enabled and in use; you can't use it if you have your own videocard. On the plus side, performance is very impressive, and video quality is almost as good as CPU-only encoding from a crappy encoder (it's still no x264). Quality is MUCH improved over GPU-based encoders.

One very disappointing aspect of the graphics integrated into Sandy Bridge is that Intel, being a CPU company, has chosen not to support any general-purpose GPU acceleration features, such as OpenCL or DirectCompute. This isn't just a lack of driver or SDK support: the GPU is almost entirely fixed-function, with a bare minimum of programmable functionality. As an alternative, Intel has provided a media processor that performs pure hardware decoding of high definition MPEG2/VC-1 (WMV9)/AVC (H.264) video. In addition, it offers pure hardware encoding of AVC (H.264) video. The idea is that you'll be able to transcode video faster, and with lower power usage, on the CPU using the media engine than you would using a program that supports nVidia's CUDA or OpenCL on a dedicated GPU. This does require that you use a program that supports the Sandy Bridge media processor, but it's expected that support will be integrated into programs like Cyberlink's Media Show Espresso at launch, allowing performance comparisons with CUDA. We also don't know what kind of video quality it will produce, but it's likely that it won't be quite as good as dedicated software encoders like x264.

A New Kind of Overclocking
Update 01/05/2011: Overclocking requires that you have a motherboard based on the Intel P67 chipset. Motherboards that support the on-die graphics (using the H67 chipset) cannot support overclocking. In Q2 Intel will release the Z68 chipset that supports the processor graphics, overclocking, and an unannounced feature called SSD Caching. In addition to the K-series processors that are fully unlocked, the other CPUs that support Turbo mode can be overclocked to 400MHz beyond their highest supported Turbo speed. Unfortunately, if you buy a K-series CPU you have to give up Intel VT-d virtualization technology (you still get VT-x) and Intel Trusted Execution Technology support, but essentially no one uses these features on the desktop, so it's not really a big deal, just annoying. Overclocking your K-series CPU to 4.4GHz+ on air seems to be attainable for nearly everyone.

A CPU's clock speed is its base clock (historically the Front Side Bus, or FSB) multiplied by a value conveniently called the clock multiplier. On Intel CPUs since the Pentium II, this multiplier has been fixed by Intel at the factory (except on the most expensive Extreme Edition processors). As a result, overclocking has been accomplished by raising the base clock speed, which is generated by a clock generator chip on the motherboard. On the Sandy Bridge processors, this clock generator is now part of the chipset, and its frequency is fixed by Intel and cannot be changed by more than about 5%. This means that overclocking as we know it is now impossible. However, realizing the value of overclocking to enthusiasts, Intel will now be releasing some processor models with an unlocked multiplier. These models will have a letter "K" at the end of the model number, and will come at about a 10% price premium. Additionally, current indications are that those processors which support Turbo Mode (the Core i5 and i7 series) can be forced to run at any of their available turbo bins. While not an end to overclocking, this does mean the days of buying the cheapest processor model and overclocking the heck out of it are now behind us.
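To put numbers on that, a quick sketch of the clock math. The ~100MHz base clock matches Sandy Bridge; the multipliers are made-up examples:

```python
# Sandy Bridge clock math: core clock = base clock x multiplier.
# The base clock is generated in the chipset and is fixed to within ~5%,
# so all meaningful overclocking headroom is in the multiplier.

BCLK_MHZ = 100  # Sandy Bridge base clock

def cpu_clock_mhz(multiplier, bclk=BCLK_MHZ):
    return bclk * multiplier

locked = cpu_clock_mhz(33)                       # a locked 3.3GHz part
locked_max = cpu_clock_mhz(33, BCLK_MHZ * 1.05)  # +5% BCLK: only ~3.47GHz
unlocked = cpu_clock_mhz(45)                     # K-series at 45x: 4.5GHz
```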

Other Platform Features
Update 01/05/2011: The Intel Z68 chipset coming in Q2 supports the processor graphics, overclocking, and an unannounced feature called SSD Caching. I don't know if it supports PCI-Express port bifurcation (splitting the PCI-E x16 into two x8s), but I would expect it does.

Sandy Bridge will use a new socket, LGA-1155, that is not compatible at all with existing motherboards and CPUs. The new motherboards feature Intel 6-series chipsets that support the new SATA600 interface on two of the 6 SATA ports (the other four are SATA300). Like the current Lynnfield processors, the CPU provides a PCI-Express 2.0 x16 connection that can be divided into two x8 connections for Crossfire/SLI, though, as with Clarkdale, motherboards based on the H6x-series chipsets that support the onboard graphics cannot divide this connection and are thus limited to one graphics card. USB3.0 will not be supported by the chipsets, but Intel is considering making a third-party USB3.0 controller part of the reference motherboard design. Intel has upgraded the PCI-E 1.0 x4 "DMI" bus that connects the chipset to the CPU to PCI-E 2.0, doubling interface bandwidth to 2.0GB/sec in each direction. This should improve performance for SATA600 and USB3.0 controller chips on the motherboard, without requiring them to do bizarre things like cannibalize PCI-E 2.0 lanes from the graphics card to get acceptable performance.
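For reference, the DMI bandwidth figure follows directly from PCIe arithmetic; here's a back-of-envelope sketch (both generations use 8b/10b line encoding, so usable bytes per lane are transfer rate × 8/10 ÷ 8):

```python
# DMI bandwidth back-of-envelope: PCIe 1.x/2.0 use 8b/10b line encoding,
# so each lane's usable throughput is (GT/s) * (8/10) / 8 bits-per-byte.

def pcie_gbps_per_direction(gt_per_s, lanes):
    return gt_per_s * (8 / 10) / 8 * lanes

old_dmi = pcie_gbps_per_direction(2.5, 4)  # PCI-E 1.0 x4: 1.0 GB/s
new_dmi = pcie_gbps_per_direction(5.0, 4)  # PCI-E 2.0 x4: 2.0 GB/s
```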

AMD's Response

Anandtech: AMD Discloses Bulldozer and Bobcat Architectures at Hot Chips 2010

The Bulldozer architecture is AMD's answer to Intel's Sandy Bridge. Bulldozer is based on "modules" of two integer cores with shared floating point and other hardware. Intel's Hyper-Threading Technology allows two threads to share a core for a very small cost in die area, and a performance improvement of about 10% on average. AMD's Bulldozer architecture actually allows two threads to execute simultaneously, for about a 50% increase in die area but with a potential 100% performance increase in the best-case scenario. The upshot is that while Sandy Bridge will be a quad-core processor that appears to Windows as 8 cores and performs like 4.4 cores, Bulldozer will be a quad-module processor that appears to Windows as 8 cores, takes up the space of 6 cores, and performs somewhere between 4 and 8 cores depending on the workload. While it's still too early to draw meaningful conclusions, it seems likely that Intel's solution will be faster and more power-efficient overall, but that AMD's will at least narrow the current performance gap, possibly take the lead in some heavily-threaded workloads, and will target a good price/performance ratio. Bulldozer will also feature an on-die AMD GPU that will likely be significantly faster and more capable than Sandy Bridge's graphics; rumors are that it will be a Radeon HD 5570-class GPU with 400 Stream Processors.
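The core-count comparisons above can be restated as a toy scaling model. The percentages are the rough estimates from the paragraph above, not measurements:

```python
# SMT vs CMT in the terms used above: Hyper-Threading buys ~10% throughput
# per core for tiny die cost; a Bulldozer module's second integer core can
# buy up to 100% in the best case, ~0% in the worst.

def effective_cores(physical_cores, per_core_gain):
    return physical_cores * (1 + per_core_gain)

sandy_bridge = effective_cores(4, 0.10)     # 4C/8T: "performs like 4.4 cores"
bulldozer_best = effective_cores(4, 1.00)   # 4 modules, ideal scaling: 8
bulldozer_worst = effective_cores(4, 0.00)  # shared resources contended: 4
```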

Some Useful Replies and Info:

movax found this sweet table comparing Asus's Sandy Bridge boards. Here's his post with more info.
movax also made this useful post about what VT-d, VT-x, and TXT are.
PC LOAD LETTER found this newb-friendly Sandy Bridge overclocking guide.

Somebody fucked around with this message at 03:35 on Jan 9, 2018


mew force shoelace
Dec 13, 2009

by Ozmaugh

Alereon posted:

Additionally, current indications are that those processors which support Turbo Mode (the Core i5 and i7 series) can be forced to run at any of their available turbo bins.

The return of the turbo button

eames
May 9, 2009

Alereon posted:

A New Kind of Overclocking

The change to overclocking is interesting. I assume that most retail processors will be sold as "K" models, since 10% is a relatively small price to pay for the extra performance a knowledgeable person can get out of it.

When I first read this paragraph I thought Intel would limit unlocked multipliers and overclocking in general to the completely unreasonably priced Extreme Editions. :argh:
Phew.

Raptop
Sep 3, 2004
not queer for western digital

Alereon posted:

Intel has also connected all parts of the CPU together using a high-speed ring bus, which is interesting because it physically sits on top of the L3 cache, like a highway overpass, allowing high-performance connectivity without taking up any room.

I hope that isn't a quote from IDF, because it definitely isn't true. Sandy Bridge cache slices sit opposite CPU cores on the ring; i.e. the core and the cache slice straddle the ring interface. There are additional wires to let a core talk to its nearest slice directly without having to emit a ring transaction. The thing I always found interesting was that the 3D graphics unit could also use the L3 cache.

Alereon
Feb 6, 2004


Raptop posted:

I hope that isn't a quote from IDF, because it definitely isn't true. Sandy Bridge cache slices sit opposite CPU cores on the ring; i.e. the core and the cache slice straddle the ring interface. There are additional wires to let a core talk to its nearest slice directly without having to emit a ring transaction. The thing I always found interesting was that the 3D graphics unit could also use the L3 cache.
Here's the slide from IDF discussing it. I used the highway overpass analogy not to indicate that it wasn't connected to the L3 cache, but to describe how it is placed on a layer over the top of the cache so as not to consume die area.

kapinga
Oct 12, 2005

I am not a number

eames posted:

The change to overclocking is interesting. I assume that most retail processors will be sold as "K" models, since 10% is a relatively small price to pay for the extra performance a knowledgeable person can get out of it.

When I first read this paragraph I thought Intel would limit unlocked multipliers and overclocking in general to the completely unreasonably priced Extreme Editions. :argh:
Phew.

Actually, I'm guessing it will be an even mix at suppliers like Newegg, and heavily favor the normal versions at Best Buy and the like. 10% isn't much of a surcharge, but it's $20 wasted if you're not planning on overclocking. That's enough for people doing price comparisons to pick the cheaper option.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
How can a mostly fixed function GPU be DirectX 10 compliant?

freeforumuser
Aug 11, 2007
Sandy Bridge, AKA: finally an Intel GPU that doesn't have to be avoided like the plague.

Even so, I'm still not a fan of Intel GPUs, but hopefully this will drive down prices on mobile midrange GPUs. Desktop-wise, I don't see why anyone would even bother with SB if they already have an overclocked ~4GHz i7/i5/i3/C2Q/Phenom II CPU.

Landerig
Oct 27, 2008

by Fistgrrl

Alereon posted:



A New Kind of Overclocking

A CPU's clock speed is its base clock (historically the Front Side Bus, or FSB) multiplied by a value conveniently called the clock multiplier. On Intel CPUs since the Pentium II, this multiplier has been fixed by Intel at the factory (except on the most expensive Extreme Edition processors). As a result, overclocking has been accomplished by raising the base clock speed, which is generated by a clock generator chip on the motherboard. On the Sandy Bridge processors, this clock generator is now part of the chipset, and its frequency is fixed by Intel and cannot be changed by more than about 5%. This means that overclocking as we know it is now impossible. However, realizing the value of overclocking to enthusiasts, Intel will now be releasing some processor models with an unlocked multiplier. These models will have a letter "K" at the end of the model number, and will come at about a 10% price premium.

This kinda pisses me off; however, I think nowadays overclocking a video card's GPU is at least as important. That's probably why Intel's only selling the unlocked ones for 10% more.

Watch someone find a way to unlock the low end models anyway, if anything just to say "I did it!".

movax
Aug 30, 2008

This is definitely going to kick-rear end for the portable market; the i7 I have in my MBP is already pretty sweet w/ integrated graphics, but now you get somewhat speedier CPU, greatly improved GPU, and reduced power consumption! Woo!

Question though, since I've just been sleeping through the Nehalem lifecycle: do the Xeons simply have the GPU portion disabled, or unconnected? I'd think it'd be kind of pointless for a 4-way Xeon machine to have 3 unused GPUs that the end-user still has to pay for.

Fats
Oct 14, 2006

What I cannot create, I do not understand
Fun Shoe

movax posted:

Question though, since I've just been sleeping through the Nehalem lifecycle: do the Xeons simply have the GPU portion disabled, or unconnected? I'd think it'd be kind of pointless for a 4-way Xeon machine to have 3 unused GPUs that the end-user still has to pay for.

As I understand it, the high end desktop and workstation processors (socket 2011 or whatever the replacement for socket 1366 is) won't have the GPU portion on the die at all.

Fats fucked around with this message at 18:43 on Sep 14, 2010

BlackMK4
Aug 23, 2006

wat.
Megamarm

mew force shoelace posted:

The return of the turbo button

The turbo button used to slow computers down, not speed them up.

Raptop
Sep 3, 2004

Combat Pretzel posted:

How can a mostly fixed function GPU be DirectX 10 compliant?

(Apparently) Other companies reuse their programmable hardware for operations that could have been compliant if done in fixed-function hardware. Or maybe he's just comparing Gen architecture with Larrabee. It's hard to tell with Intel architects.

KKKLIP ART
Sep 3, 2004

BlackMK4 posted:

The turbo button used to slow computers down, not speed them up.

It did speed them up when you pressed it a second time :v:

I'm actually really excited about Sandy Bridge. I don't do overclocking, and the current i5 stuff seems beastly, and this is just an improvement on top of that. My C2D Wolfdale chip is still doing what I want it to do, but I'd like to retire it to server duty and get one of these when they drop.

BlackMK4
Aug 23, 2006

:v: True

Magnificent Quiver
May 8, 2003


BlackMK4 posted:

The turbo button used to slow computers down, not speed them up.

You're saying that when the button was in its activated position, it made the computer faster?

PUBLIC TOILET
Jun 13, 2009

mew force shoelace posted:

The return of the turbo button

:cmon: Taking steps backward in order to make leaps forward?

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance

KKKLIP ART posted:

I'm actually really excited about Sandy Bridge. I don't do overclocking, and the current i5 stuff seems beastly, and this is just an improvement on top of that. My C2D Wolfdale chip is still doing what I want it to do, but I'd like to retire it to server duty and get one of these when they drop.

Same here. I have an E8400@3.6 which is still good enough for now, but I'm lusting after one of those i5-2500K chips. Any word on pricing and when in Q1 2011 these will drop?

BlackMK4
Aug 23, 2006


Magnificent Quiver posted:

You're saying that when the button was in its activated position, it made the computer faster?
edit: I see what you did there, grammar nazi.

The turbo button 'on' ran the computer at normal speed; with it 'off' it was slower, for games/apps that were synchronized to the clock rate.

BlackMK4 fucked around with this message at 23:08 on Sep 14, 2010

dexter
Jun 24, 2003

kapinga posted:

Actually, I'm guessing it will be an even mix at suppliers like Newegg, and heavily favor the normal versions at Best Buy and the like. 10% isn't much of a surcharge, but it's $20 wasted if you're not planning on overclocking. That's enough to people doing price comparisons to pick the cheaper option.

People buy processors from Best Buy? I didn't even know they sold them.

Mr VacBob
Aug 27, 2003
Was yea ra chs hymmnos mea

Alereon posted:

As an alternative, Intel has provided a media processor that performs pure hardware decoding of high definition MPEG2/VC-1 (WMV9)/AVC (H.264) video. In addition, it offers pure hardware encoding of AVC (H.264) video. The idea is that you'll be able to transcode video faster, and with lower power usage, on the CPU using the media engine than you would using a program that supports nVidia's CUDA or OpenCL on a dedicated GPU. This does require that you use a program that supports the Sandy Bridge media processor, but it's expected that support will be integrated into programs like Cyberlink's Media Show Espresso at launch, allowing performance comparisons with CUDA. We also don't know what kind of video quality it will produce, but it's likely that it won't be quite as good as dedicated software encoders like x264.

Not only would it be not as good as x264, it won't even be faster. My slower CPU gets the same speed (180fps) for small video sizes using the x264 "faster" preset. A CPU is a much better video encoder than any kind of GPU for all kinds of great reasons; the only reason to avoid one is power budgeting.

Pretty Cool Name
Jan 8, 2010

wat

Why do I want a lovely graphics processor built into my CPU?

Fake edit: Other than to throw extra money at Intel for no reason.

B-Nasty
May 25, 2005

Pretty Cool Name posted:

Why do I want a lovely graphics processor built into my CPU?

Fake edit: Other than to throw extra money at Intel for no reason.

lovely? As the OP said, this will seriously put a dent in low-to-midrange graphics card sales. There'll always be the X-treme gamer bunch that will pay $400+ to have the fastest GPU hot off the assembly line, but for 90% of computer users, a dedicated graphics card is just one more thing to worry about breaking, plus compatibility issues.

The minimal Windows 7 graphical eye-candy doesn't require much, and the 3D GUI revolution has yet to (ever) take a real hold. The fact that browsers are only just beginning to take advantage of the GPU for page rendering shows just how little super fast GPUs matter to most.

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
The important thing here is it's not lovely. Is it terrible compared to a discrete option? Yes.

From the documentation, it's very much on par with current AMD onboard offerings. However, this is not targeted at you. It's for inexpensive PCs and businesses.

e:fb

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Mr VacBob posted:

Not only would it be not as good as x264, it won't even be faster. My slower CPU gets the same speed (180fps) for small video sizes using the x264 "faster" preset. A CPU is a much better video encoder than any kind of GPU for all kinds of great reasons; the only reason to avoid one is power budgeting.

I would guess that a specialized "media encoder" chip would be optimized to be a media encoder, rather than being a locked-down GPU.

Alereon
Feb 6, 2004


Mr VacBob posted:

Not only would it be not as good as x264, it won't even be faster. My slower CPU gets the same speed (180fps) for small video sizes using the x264 "faster" preset. A CPU is a much better video encoder than any kind of GPU for all kinds of great reasons; the only reason to avoid one is power budgeting.
It was actually clocked at ~400fps, and that includes decoding, scaling, and encoding simultaneously. Keep in mind also that since this is being done in dedicated fixed-function hardware, there won't be much of a performance impact on the system. You could have a video transcoding in the background while playing a game without the performance of either being impacted. Besides, the current generation of GPU-accelerated encoders look like crap, and most people don't really care if they're just putting videos on their mobile devices or uploading to Youtube.
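For scale, here's what those throughput numbers mean for a full-length movie. The 400fps and 180fps figures are the ones quoted in this exchange; the 2-hour/24fps source is an illustrative assumption:

```python
# Transcode wall-clock time at the throughputs discussed: Quick Sync at
# ~400fps vs x264's "faster" preset at ~180fps, for a 2h 24fps source.

def transcode_minutes(total_frames, encoder_fps):
    return total_frames / encoder_fps / 60

frames = 2 * 60 * 60 * 24                     # 172,800 frames
quick_sync = transcode_minutes(frames, 400)   # ~7.2 minutes
x264_faster = transcode_minutes(frames, 180)  # ~16 minutes
```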

movax
Aug 30, 2008

Fats posted:

As I understand it, the high end desktop and workstation processors (socket 2011 or whatever the replacement for socket 1366 is) won't have the GPU portion on the die at all.

Ah this makes sense, thanks. I'm pumped for Sandy Bridge; my first machine was a Pentium (family), then my personal boxes have gone P3 1.11GHz -> C2D E6600. I think it's almost upgrade time, even though my E6600 still does everything I need.

That's kind of the thing after the C2D IMHO; if you forget about games for a second, these newer CPUs aren't really opening too many new doors...they just help you get existing stuff done waaay faster. Which is great and all, but it's not like a night and day "holy poo poo I have a C2D, now I can actually finish encoding a H.264 movie in this lifetime!"

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

movax posted:

Ah this makes sense, thanks. I'm pumped for Sandy Bridge; my first machine was a Pentium (family), then my personal boxes have gone P3 1.11GHz -> C2D E6600. I think it's almost upgrade time, even though my E6600 still does everything I need.

That's kind of the thing after the C2D IMHO; if you forget about games for a second, these newer CPUs aren't really opening too many new doors...they just help you get existing stuff done waaay faster. Which is great and all, but it's not like a night and day "holy poo poo I have a C2D, now I can actually finish encoding a H.264 movie in this lifetime!"

Yeah, I know what you mean. My C2D E8400@3.25 isn't setting any records, but that and a 1 GB Radeon 4850 let me do everything I want and pretty darn well. Just about the only things upgrading would do for me are speed up 5 DVD encodes a year, run Folding@Home faster, and get an even greater excess of frames per second in the games I run, or maybe stabilize newer/less-well-behaved games at 60 fps instead of 30. And I built this machine roughly two weeks after the E8400 came out. Even a now-entry-level 4 GB of RAM doesn't really hold me back on anything.

Things have changed in ten years. No matter what I decide to do, I can manage it, and I can fit the pauses into times of my life when I'm doing other things anyway (or just multitask around it). I helped a family friend pick a new computer recently, jumping from a poorly-aging P4 Dell Dimension to a Core i5 entertainment laptop. What a difference that was. I don't think I'll ever experience something like that again.

Will we never again feel the thrill of a desktop or laptop upgrade with a significant boost in capability? Are we doomed forever to only get our "new toy" excitement when Apple releases their version of a previously unpopular product and revitalizes that market?

:smith:

B-Nasty
May 25, 2005

Factory Factory posted:

Will we never again feel the thrill of a desktop or laptop upgrade with a significant boost in capability? Are we doomed forever to only get our "new toy" excitement when Apple releases their version of a previously unpopular product and revitalizes that market?

:smith:

Got SSD?

The newest ones are *significantly* faster than even the best spinning-platter drives, and will make your computer feel like a different machine altogether. Once these start getting cheap enough to become the default (non-media) drives, we'll probably see all kinds of cool built-in RAID/optimized-to-hell craziness.

Doc Block
Apr 15, 2003
Fun Shoe
How is a GPU with no shader support anything other than useless? Even the ancient integrated GPU in my parents' computer can do shaders well enough to have Aero Glass turned on in Windows and run the handful of old games they play.

Even the GPU in my iPhone can do shaders.

wolrah
May 8, 2006
what?

B-Nasty posted:

Got SSD?

The newest ones are *significantly* faster than even the best spinning-platter drives, and will make your computer feel like a different machine altogether. Once these start getting cheap enough to become the default (non-media) drives, we'll probably see all kinds of cool built-in RAID/optimized-to-hell craziness.

Right on the money. A friend of mine just swapped his main boot drive for a 60GB SandForce-based SSD and it's night and day. I'm just waiting for a 120+GB model I can afford before I do exactly the same thing. We both have similar computers, but at a LAN, if we both click StarCraft II at the same time, his is up and running before mine even shows a loading screen. It's absurdly fast.

Anand from Anandtech has noted that he keeps some of the older SSDs (fast for reads but no good for multitasking) in USB enclosures for OS installs, and that an SSD-to-SSD Windows 7 install takes under 3 minutes from first boot to first reboot. Seek times may not seem like much, but apparently they add up pretty quickly as far as user experience is concerned, since a good SSD-equipped machine feels like it just has no delay.
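As a rough back-of-the-envelope sketch of why those seek times add up (the latency numbers below are assumed ballpark figures, not measurements from any particular drive):

```python
# Back-of-the-envelope: why random-access latency dominates perceived
# responsiveness. Latencies are illustrative assumptions, not measurements.
HDD_SEEK_S = 0.009   # ~9 ms average seek for a 7200 RPM drive
SSD_SEEK_S = 0.0001  # ~0.1 ms random-access latency for a SATA SSD

def seek_overhead(num_random_reads, seek_s):
    """Total time spent just waiting on seeks for small random reads."""
    return num_random_reads * seek_s

# A Windows boot or a big app launch can issue thousands of small random reads.
reads = 10_000
print(f"HDD: {seek_overhead(reads, HDD_SEEK_S):.1f} s of pure seek time")
print(f"SSD: {seek_overhead(reads, SSD_SEEK_S):.1f} s of pure seek time")
```

Under those assumptions, ten thousand random reads cost a minute and a half of pure seeking on spinning rust versus about a second on flash, which lines up with the "no delay" feel.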

KKKLIP ART
Sep 3, 2004

I think for bulk storage, platter drives will reign supreme for a while. There was a nice 2TB drive on Amazon.com for like $80 the other day. The dollar-to-gigabyte ratio on that is insane. I'm also waiting for a reasonably priced 120+GB SSD (I may be waiting for a while) for a boot-and-some-apps drive. I've watched enough of those "boot Windows and launch 150 programs" videos to see how stupidly fast they are.

Un-l337-Pork
Sep 9, 2001

Oooh yeah...


So, if I have an E8400, a Radeon 4870, a Vertex 2 SSD, and 8GB of RAM, what do I need to upgrade next? I play computer games and like to be able to play whatever I want at 1680x1050. I'm thinking that I should buy a 5850 or a 5870 when the price drops a bit?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Doc Block posted:

How is a GPU with no shader support anything other than useless? Even the ancient integrated GPU in my parents' computer can do shaders well enough to have Aero Glass turned on in Windows and run the handful of old games they play.

Even the GPU in my iPhone can do shaders.
The GPU in Sandy Bridge has shader support and is fully DirectX 10 compliant, which is why it can run modern games decently; they just made everything that COULD be fixed-function into fixed-function hardware, to reduce power usage and complexity. Here's the page of the Anandtech article with details. It actually comes very close to executing DirectX 10 API instructions directly, which is an interesting approach.

Un-l337-Pork posted:

So, if I have an E8400, a Radeon 4870, a Vertex 2 SSD, and 8GB of RAM, what do I need to upgrade next? I play computer games and like to be able to play whatever I want at 1680x1050. I'm thinking that I should buy a 5850 or a 5870 when the price drops a bit?
I'd recommend taking this to the System Building, Upgrading, and Parts Picking Megathread.

TOOT BOOT
May 25, 2010

I imagine this will provide a bit of a boost to PC Gaming as soon as these start filtering down into pre-built PCs.

movax
Aug 30, 2008

B-Nasty posted:

Got SSD?

Last de-rail: yeah, the storage market is currently where the "holy poo poo, night and day" difference is at, thanks to SSD. Don't have to worry about mechanical failure, you get bitchin' fast speeds, etc.

On-topic: I really appreciate the built-in GPU because 1) battery life on portables should be far better, 2) integrated GFX that aren't awful, 3) when spec'ing out a "low-end" machine for grandma/business, you don't have to compromise and forgo Aero/future HW acceleration due to cost. You get that GPU with the CPU.

Serious gamers can scoff at it, but to me, being able to just drop a CPU in a new build for the parents and know that I'm not closing doors to pretty GUIs is a pretty sweet side benefit (that you can get with either Intel or AMD now I suppose).

The on-board GPU does eat a chunk of your system RAM though, correct? Or can mobo manufacturers slap 128MB/256MB of RAM on-board and wire that straight to the GPU? I think my AMD 780G-based mobo lets you choose between eating system RAM or some dedicated RAM on the mobo.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

movax posted:

Last de-rail: yeah, the storage market is currently where the "holy poo poo, night and day" difference is at, thanks to SSD. Don't have to worry about mechanical failure, you get bitchin' fast speeds, etc.

On-topic: I really appreciate the built-in GPU because 1) battery life on portables should be far better, 2) integrated GFX that aren't awful, 3) when spec'ing out a "low-end" machine for grandma/business, you don't have to compromise and forgo Aero/future HW acceleration due to cost. You get that GPU with the CPU.

Serious gamers can scoff at it, but to me, being able to just drop a CPU in a new build for the parents and know that I'm not closing doors to pretty GUIs is a pretty sweet side benefit (that you can get with either Intel or AMD now I suppose).

The on-board GPU does eat a chunk of your system RAM though, correct? Or can mobo manufacturers slap 128MB/256MB of RAM on-board and wire that straight to the GPU? I think my AMD 780G-based mobo lets you choose between eating system RAM or some dedicated RAM on the mobo.

Systems shipping with 2GB of RAM aren't going to be slowed in any real way by 6% of system RAM being allocated to desktop composition, and once you're up to 4GB it doesn't matter in the slightest. You could have all the dedicated GPU RAM in the world, but this thing isn't going to be powerful enough to drive anything with it, so you might as well just use the system memory that is already there. If you want anything more intensive than Aero or Flash HW acceleration, get real dedicated graphics.
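For what it's worth, that "6%" figure checks out as quick arithmetic, assuming an illustrative 128MB framebuffer carve-out (the 128MB size is an assumption here, not a quoted spec):

```python
# Quick arithmetic behind the "6% of system RAM" figure for an
# integrated-GPU carve-out. The 128MB framebuffer size is an assumed
# illustrative value, not a quoted spec.
def carveout_fraction(gpu_mb, system_gb):
    """Fraction of system RAM consumed by a GPU carve-out of gpu_mb megabytes."""
    return gpu_mb / (system_gb * 1024)

print(f"{carveout_fraction(128, 2):.1%} of 2GB")  # just over 6%
print(f"{carveout_fraction(128, 4):.1%} of 4GB")  # half that with 4GB installed
```

The point being that the fraction halves every time you double system RAM, so on a 4GB machine the carve-out is already noise.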

movax
Aug 30, 2008

BangersInMyKnickers posted:

Systems shipping with 2GB of RAM aren't going to be slowed in any real way by 6% of system RAM being allocated to desktop composition, and once you're up to 4GB it doesn't matter in the slightest. You could have all the dedicated GPU RAM in the world, but this thing isn't going to be powerful enough to drive anything with it, so you might as well just use the system memory that is already there. If you want anything more intensive than Aero or Flash HW acceleration, get real dedicated graphics.

I recall that with earlier integrated graphics chipsets (915GL or older), the loss of a chunk of your system RAM wasn't the terrible part; rather, there was a parasitic loss of overall memory bandwidth due to the GPU sharing some of that memory.

Or maybe that was just in the silly Sandra synthetic memory benchmarks, which no one really cared about except in overclocking e-peen contests.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

movax posted:

Or maybe that was just in the silly Sandra synthetic memory benchmarks, which no one really cared about except in overclocking e-peen contests.

That. We're talking about systems that just need to do the job "well enough". And since the memory controller has moved into the CPU and DDR3 is here, I seriously doubt memory bandwidth is going to be saturated in most configurations.

Doc Block
Apr 15, 2003
Fun Shoe

Alereon posted:

The GPU in Sandy Bridge has shader support and is fully DirectX 10 compliant, which is why it can run modern games decently; they just made everything that COULD be fixed-function into fixed-function hardware, to reduce power usage and complexity. Here's the page of the Anandtech article with details. It actually comes very close to executing DirectX 10 API instructions directly, which is an interesting approach.
I'd recommend taking this to the System Building, Upgrading, and Parts Picking Megathread.

Kind of annoying, then, that the article you quoted says the hardware is fixed-function only, when they actually just meant it has no GPGPU support but still supports DirectX/OpenGL shaders.
