|
Alereon posted:Here's the article at TechReport. It's kind of depressing to see that my four-year-old Core 2 Quad Q9650 is still better than any AMD processor at gaming. Yorkfield was a beast. I had a Q9550 which I ran all the time at 3.8GHz. In fact, the only reason I upgraded to a 2500K is because I had some hard-to-pin-down problem, which I think was possibly related to the motherboard. That chip had a monstrous level 2 cache compared to even the stuff you get now.
|
# ? Aug 24, 2012 19:05 |
|
Did I miss any news about the Desktop Ivy Bridge Core i3 processors being held back? I remember them slated for July/August but only the mobile versions are out at the moment. I can't google up anything other than price previews and "coming soon in June" old news.
|
# ? Aug 25, 2012 20:39 |
|
Berk Berkly posted:Did I miss any news about the Desktop Ivy Bridge Core i3 processors being held back? I remember them slated for July/August but only the mobile versions are out at the moment. I can't google up anything other than price previews and "coming soon in June" old news. I too was curious about this. I'm looking to build something with an i3 and figured I would wait for the newer generation. Everything I have found has indicated a Q3 release, which is now.
|
# ? Aug 26, 2012 12:45 |
|
I'm thinking those dies are going to the laptop i3/i5 3000-series, which are currently available and a bit more profitable.
|
# ? Aug 26, 2012 13:31 |
|
How much is known about Haswell at this point? I'm hoping to skip an Ivy Bridge build and wait for Haswell.
|
# ? Aug 27, 2012 02:18 |
|
Doh004 posted:I too was curious about this. I'm looking to build something with an I3 and figured I would wait for the newer generation. Everything I have found has indicated a Q3 release which is now. I've seen a bunch of stuff online saying that the Ivy i3s are going to launch in September.
|
# ? Aug 27, 2012 06:11 |
|
COCKMOUTH.GIF posted:How much is known about Haswell at this point? I'm hoping to skip an Ivy Bridge build and wait for Haswell. What do you have right now? Personally, I'd only be thinking of upgrading if I didn't already have a quad-core. Nehalem quads are still good enough for games, and I think even the Core 2 Quads with a healthy overclock can handle some newer titles. If I had a Q6600 or similar instead of an E6600, I'd probably still have it and would have upgraded to Haswell next year.
|
# ? Aug 27, 2012 14:57 |
|
movax posted:What do you have right now? Personally, I'd only be thinking of upgrading if I didn't already have a quad-core. Nehalem quads are still good enough for games, and I think even the Core 2 Quads with a healthy overclock can handle some newer titles. I have a Core 2 Duo E8400 now. I had looked at just swapping in a quad-core Core 2 CPU, but I think the prices are going up.
|
# ? Aug 28, 2012 00:31 |
|
XbitLabs has a news post with details about the new Intel Atoms coming in 2013, which just might be the first Intel Atom processors that are not obsolete garbage at the time of release. The new Atom CPUs will have four cores based on the Silvermont architecture, a new 64-bit low-power CPU architecture with out-of-order execution (like modern ARM Cortex and Core-series processors, unlike the current Atom and older ARM CPUs). The graphics could be dubbed Intel HD Graphics 1000, as it's exactly a quarter of an HD Graphics 4000, and includes hardware video transcoding (though probably not QuickSync). There's also a supplemental decoder IP block with support for JPEG and Google WebM (VP8), which points to an eventual destination in smartphones. This is a real SoC, with all the connectivity and I/O that's normally part of a chipset built into the die. There's also dual-channel DDR3 support, which is necessary to compete with the high bandwidth available to ARM processors. The most important competitor will be ARM SoCs based on the Cortex A15, which represents a massive increase in per-clock performance and the processing capabilities of ARM cores, stepping into the realm of desktop processors. The performance of Intel's graphics solution remains to be compared against the next-gen multi-core GPUs in ARM SoCs, such as those offered by PowerVR, Qualcomm, nVidia, and ARM themselves. Some key advantages Intel has are its 22nm process, which is at least a year ahead of the rest of the industry, and its significant lead in 64-bit adoption. ARM has yet to release 64-bit products, so how much traction 64-bit ARM server software will get remains to be seen. x64 server software is already available, which will ease the transition to low-power servers based on the upcoming Atom processors.
|
# ? Aug 29, 2012 02:33 |
|
Finally! The Core i3s and a few other goodies have just gone to market: http://www.cpu-world.com/news_2012/2012090202_Intel_launches_mid-class_and_budget_desktop_CPUs.html Berk Berkly fucked around with this message at 01:58 on Sep 3, 2012 |
# ? Sep 3, 2012 00:14 |
|
Berk Berkly posted:Not too long ago, the Core i3s and a few other goodies went to market: Oh man, awesome timing.
|
# ? Sep 3, 2012 01:41 |
|
Uh, is there actually any evidence that ARM servers are any kind of popular already, or going to be anytime soon? It's a bit premature to call Intel late.
|
# ? Sep 3, 2012 16:04 |
|
The low-power server market is immature, but it's there: Calxeda sells quad-core ARM Cortex A9 SoCs to system vendors for high-density servers, SeaMicro does the same with Atoms, and AMD has talked a lot about using its Brazos APUs in platforms that leverage low-power CPU cores and the graphics processor for compute. They also bought SeaMicro, but I don't know if anyone has actually productized Brazos like that yet. By the time such servers are visibly popular, Intel will have missed its best opportunity, so one would hope they have products available when companies start looking.
|
# ? Sep 3, 2012 20:28 |
|
I have a question regarding Thunderbolt on Intel motherboards: is there a reason for the lack of dual-port Thunderbolt motherboards? As far as I can tell, there is only the one Gigabyte motherboard with two Thunderbolt ports available. There's an Asus motherboard for which you can buy a Thunderbolt expansion card that can pass DisplayPort through into the Thunderbolt connections it provides, but this would not allow on-board video to power two displays through Thunderbolt, as I've yet to see a motherboard that provides more than one DisplayPort output for onboard graphics. The descriptions of the Gigabyte motherboard say that two independent displays can be driven by the two Thunderbolt ports, implying that the Intel onboard graphics can power more than one DisplayPort output. Is there some sort of technological limitation of current-generation Thunderbolt hardware that makes dual Thunderbolt infeasible, or do manufacturers just see the configuration as unpopular? Is this something that may change with the next chipset/CPU release? I have a smaller Thunderbolt question too: has there been any news of a Thunderbolt peripheral that simply spins off a DisplayPort connection and continues the daisy chain? Thus far I've only seen the overpriced Thunderbolt hubs, which usually present HDMI plus a bunch of other things like USB and audio. A device that just lets you plug in a DisplayPort monitor and continue the daisy chain would be great for using more than one DisplayPort monitor.
|
# ? Sep 4, 2012 00:05 |
|
Well, the short answer is that Thunderbolt is rare and expensive, as are TB-compatible peripherals. Hell, there aren't even any Thunderbolt displays that aren't the Apple one on the market yet. The controller chips and cables for Thunderbolt are loving expensive. That $50 Apple cable? That's pretty much what they actually cost, because each end of the cable has complex signalling chips. As Intel keeps iterating on Thunderbolt, it will get cheaper. Already, we're seeing slightly-more-sane devices built on the cheapest of the Cactus Ridge TB controllers. By the next generation, a lot more of the signalling circuitry should be in the TB controller itself, rather than in the cable, so it will be cheaper to produce and use TB.
|
# ? Sep 4, 2012 00:26 |
|
Install Gentoo posted:Uh, is there actually any evidence that ARM servers were any kind of popular already, or going to be anytime soon? It's a bit premature to cal Intel late. ARM's stock took a small hit last week when the CEO of EMC commented he saw no future for ARM in the server market.
|
# ? Sep 4, 2012 04:01 |
|
Does the thunderbolt spec..specify putting signalling devices in the cable?
|
# ? Sep 4, 2012 04:10 |
|
When my company was considering adding Thunderbolt support to one of our products, we looked into the cost of bundling a Thunderbolt cable in the box and decided it didn't really make sense unless we were willing to add something like $100 to the retail price just because of the cable. Plus another $100 for the controller chip. We decided not to add Thunderbolt support.
|
# ? Sep 4, 2012 04:19 |
|
dud root posted:Does the thunderbolt spec..specify putting signalling devices in the cable? It's not necessary per se. It's just that, in practice, to move DisplayPort 1.2 and/or four lanes of PCI Express through a copper cable, you need some really loving good DSP (which is why some DisplayPort cables already use less-expensive signalling chips of their own). Without the chips (i.e., a passive cable instead of an active one), the cable could only reach about a foot and still run at the same speed. Here's a wordier Ars article on the matter. When they finally release consumer optical Thunderbolt, the cables will get cheaper. Not cheap, because they're still fiber-optic cables, but cheaper.
|
# ? Sep 4, 2012 04:54 |
|
Alereon posted:SeaMicro does the same for Atoms Seamicro tried pretty hard to sell to my company before they got acquired by AMD. When we got them to actually run real tests on their systems, even in best-case benchmarks their Atom-based servers performed less than half as well per dollar or per watt. That was Hadoop compared to normal whitebox compute nodes (8 Nehalem cores / 24 gigs of RAM, GigE interconnect). I'm not convinced you could find any real-world workload where Atom CPUs would actually save you money, rackspace, power, or whatever else you most need to save.
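For what it's worth, the normalization being described boils down to simple ratios; here's a minimal sketch with entirely made-up numbers (the real benchmark figures from those tests aren't public):

```python
# Illustration only: every number below is hypothetical, not the
# actual results from the Hadoop tests described above.

def perf_per_dollar(jobs_per_hour: float, system_cost: float) -> float:
    """Throughput normalized by purchase price."""
    return jobs_per_hour / system_cost

def perf_per_watt(jobs_per_hour: float, watts: float) -> float:
    """Throughput normalized by power draw."""
    return jobs_per_hour / watts

# Hypothetical whitebox Nehalem node (8 cores / 24 GB) vs. an Atom server:
nehalem_ppd = perf_per_dollar(jobs_per_hour=100, system_cost=4000)
atom_ppd = perf_per_dollar(jobs_per_hour=20, system_cost=2000)

# "Less than half as well per dollar" means this ratio falls below 0.5:
print(f"{atom_ppd / nehalem_ppd:.2f}")  # prints 0.40 with these made-up numbers
```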
|
# ? Sep 4, 2012 06:31 |
|
Bottom-tier VPS?
|
# ? Sep 4, 2012 06:39 |
|
Aquila posted:I'm not convinced you could find any real world workload that would actually save you money, rackspace, power, or whatever your most important thing to save by using Atom cpus. The primary way I can imagine ARM having staying power in the server market would be as directly executed or virtualized mobile workloads in a datacenter. The prospect of client devices actually shifting to ARM CPUs is pretty scary. What's funnier is how mediocre ARMH stock has performed despite so much of the industry having a huge vested interest in their SoC designs.
|
# ? Sep 4, 2012 13:50 |
|
Aside from price (and probably cable durability as well), it's a lot easier to send power over copper.
|
# ? Sep 4, 2012 15:26 |
|
Zhentar posted:Aside from price (and probably cable durability as well), it's a lot easier to send power over copper. But it can't be too hard to add copper for power to an optical cable.
|
# ? Sep 4, 2012 19:36 |
|
Apparently pure copper is cheaper, once you account for Intel's cost in developing a sufficiently flexible fiber optic cable.
|
# ? Sep 4, 2012 20:08 |
|
Cost and durability are definitely the main issues. I work with lasers and fiber optics, and the biggest cause of broken fibers is kinking: if the cable gets kinked, the fiber breaks pretty easily. Cost factors in some too; plastic fiber is much cheaper than normal glass fiber, but still more expensive than copper.
|
# ? Sep 5, 2012 06:07 |
|
Haswell info from IDF. AnandTech liveblog archive (latest at top) Short version: continue the SNB/IVB trend, evolving the same microarchitecture and general features while expanding the product range, so Haswell will scale from tablets to top-end servers. Elsewhere, we've heard scuttlebutt that client workloads will see about 12% higher IPC vs. Ivy Bridge.
|
# ? Sep 11, 2012 21:20 |
|
Bit more from AnandTech on Haswell's cache: The GPU will get up to 128 MB of dedicated cache. That's a lotta cache on a tiny chip. E: Are you guys not excited? Because I'm loving PUMPED!!! Factory Factory fucked around with this message at 01:55 on Sep 13, 2012 |
# ? Sep 13, 2012 01:47 |
|
Well I mean I'm going to need one to keep up with the times, you know
|
# ? Sep 13, 2012 04:05 |
|
Factory Factory posted:Bit more from AnandTech on Haswell's cache: What does this really mean, though? What kinds of applications can benefit from 128MB of cache? GPGPU stuff? Certainly not any modern games, right? Maybe photo editing, where an entire image can be stored in cache and complex processing can visually update in real time?
|
# ? Sep 13, 2012 06:04 |
|
It's used along with the resource streamer to alleviate RAM bottlenecks. Current GPUs rely on a lot of local RAM and relatively small L1 and L2 caches for high-bandwidth operations. GK104 (Nvidia GeForce GTX 680), for example, has 768 KB of L2 cache, which is not a lot by CPU standards. This cache is kept fed by transfers from VRAM at about 192 GB/s (the VRAM itself is filled over PCIe at no more than ~16 GB/s). Meanwhile, a CPU IGP has to cope with only having as much bandwidth as can be spared from system RAM. On a dual-channel DDR3-1600 system, that means the IGP can only have a maximum of ~21 GB/s. Similarly-performing dedicated cards tend to have at least double that, and dedicated rather than shared. So how do you keep the cache filled so that the IGP can actually do work? If you want high performance, you can't rely on a lot of fast VRAM the way a dedicated card does. We need one more piece: how VRAM gets filled. Every time you want to transfer data from system RAM to VRAM, the CPU has to stop what it's doing, read data from RAM, then send that data to the GPU. When you're dealing with a PCIe GPU, that data is sent over the PCIe bus; the RAM is only needed to read data. But with an IGP that shares system RAM, this means that data is copied within the RAM via the CPU. Your effective bandwidth is half your RAM bandwidth, because it can write only as fast as it can read. Side note: a successful heterogeneous compute systems architecture would eliminate the "read then write again" problem, but you'd still have bottlenecks. And then the GPU has to act on the data. This requires filling its cache, so it goes in and starts doing reads and writes to its VRAM in system RAM. Except it's sharing that bandwidth with the CPU doing whatever it's doing. But oops, there goes another transfer from system RAM to VRAM.
All at the same time, your mere 21 GB/s of dual-channel DDR3 is being hammered by reading/copying and by GPU operations, and also by CPU operations from the three other cores on the CPU. So what do you do if you want to build a high-performance GPU that shares system RAM? Option 1 is to vastly increase the RAM bandwidth. Stick an IGP on Sandy Bridge E with its quad-channel RAM. Except adding more and more RAM channels is expensive and difficult to do; it raises costs and power consumption all around the system. Plus somebody not using the IGP would find it to be wasted expense. Option 2 is Haswell's giant cache and resource streamer. That up-to-128MB of cache has an internal bandwidth of up to 512 GB/s. As long as data is in the cache, you don't have bandwidth problems, which is especially important for compute workloads. The resource streamer cuts down on the amount that the CPU and GPU have to hit system RAM, thus increasing the bandwidth available at any given time. It allows the CPU to dump data to the GPU in much larger chunks, which reduces the number of time-costly context switches that have to be performed. And having that much data on hand reduces the amount that the IGP has to call out to its VRAM cache in system RAM. So in short, that cache (plus the resource streamer) lets you run a high-performance GPU on limited RAM bandwidth.
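To put rough numbers behind the figures above, here's a quick back-of-the-envelope sketch: dual-channel DDR3-1600 peaks at 25.6 GB/s theoretical (the ~21 GB/s is what you see in practice), and a RAM-to-RAM copy can never exceed half of peak.

```python
# Back-of-the-envelope DRAM bandwidth math for the figures above.

def peak_bandwidth_gbs(channels: int, mega_transfers: int, bus_bytes: int = 8) -> float:
    """Theoretical peak in GB/s: channels * 64-bit (8-byte) bus * MT/s."""
    return channels * bus_bytes * mega_transfers * 1e6 / 1e9

dual_ddr3_1600 = peak_bandwidth_gbs(2, 1600)
print(f"Dual-channel DDR3-1600 peak: {dual_ddr3_1600:.1f} GB/s")              # 25.6
print(f"Quad-channel (SNB-E) peak:   {peak_bandwidth_gbs(4, 1600):.1f} GB/s")  # 51.2

# A RAM-to-"VRAM" copy within the same DRAM must read and write every
# byte, so effective copy bandwidth is at best half of peak:
print(f"Same-DRAM copy ceiling:      {dual_ddr3_1600 / 2:.1f} GB/s")           # 12.8
```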
|
# ? Sep 13, 2012 07:07 |
|
Factory Factory posted:So in short, that cache (plus the resource streamer) lets you run a high-performance GPU on limited RAM bandwidth. So if an app doesn't need more than 128MB of VRAM, will a Haswell GPU with the 128MB cache have the potential to outperform a mid/high-end discrete GPU?
|
# ? Sep 13, 2012 07:21 |
|
No, because it still has a limited number of execution units to actually crunch the numbers. But it's forward-looking, and laying the groundwork for Broadwell's massive GPU revamp.
|
# ? Sep 13, 2012 07:31 |
|
With Haswell, are there any changes to Thunderbolt and the PCIe 4x limitation? I really, really want to know when we'll see this interface replacing docking stations and when eGPUs will become available.
|
# ? Sep 14, 2012 19:29 |
|
Factory Factory posted:No, because it still has a limited number of execution units to actually crunch the numbers. But it's forward-looking, and laying the groundwork for Broadwell's massive GPU revamp. If so, I'd love to see something like this with a Broadwell-based CPU in 2014; potentially you could have a selection of systems with a similar form factor for ~$500. Would make a hell of a "Steam Box". Excellent write-up explaining approaches to GPU bandwidth solutions on CPUs, btw.
|
# ? Sep 14, 2012 19:41 |