|
mayodreams posted:Hell, I've been running at 4.3GHz on an i5-2500K with a Hyper 212 for almost 3 years with zero issues. This machine has been bar none the best I've ever built. Apparently you can hit almost 4.7 with voltage tweaking before it becomes unstable/too hot. Anandtech was running intelburntest and they hit 4.7ghz at 73c on air, and it held stable.
|
# ? Oct 23, 2014 06:01 |
|
orange juche posted:Apparently you can hit almost 4.7 with voltage tweaking before it becomes unstable/too hot. Anandtech was running intelburntest and they hit 4.7ghz at 73c on air, and it held stable. Oh yeah, Sandy Bridge takes it like a champ. But sometimes you stop for noise and power reasons when you don't need to push it. Been running my 2500K @ 4.4 for ages happily.
|
# ? Oct 23, 2014 06:04 |
|
orange juche posted:Apparently you can hit almost 4.7 with voltage tweaking before it becomes unstable/too hot. Anandtech was running intelburntest and they hit 4.7ghz at 73c on air, and it held stable.
|
# ? Oct 23, 2014 06:07 |
|
It's a recurring theme that when people ITT want to look at something positive we start talking about Intel.
|
# ? Oct 23, 2014 11:07 |
|
keyvin posted:Its a recurring theme that when people ITT want to look at something positive we start talking about Intel. Poor AMD, they can't catch a break. On a positive note, I'll always fondly remember them, like 3Dfx. My Am386SX-40 was a cheap and fast 386, my K6-200 was decent, and my K6-2 450 overclocked well and gave AMD the reputation of being a company that doesn't make you change motherboards (socket/super 7). Not that it matters now, sadly. ATi is a different story; it's pointless and ridiculous to add the two together historically.
|
# ? Oct 23, 2014 12:26 |
|
keyvin posted:Its a recurring theme that when people ITT want to look at something positive we start talking about Intel. And that's why the Pentium AE was such a genius masterstroke. It singlehandedly cut the legs out from under AMD in the only remaining price/performance category that they had going for them.
|
# ? Oct 23, 2014 15:43 |
|
We can only hope Sieg and Son reaches out to AMD in the near future; I think they can leverage some great synergies between the two and take the whole business to the next level.
|
# ? Oct 23, 2014 17:37 |
|
I mean, sure, the Intel/WY synthetics are better, but if you're on a budget the AMD-powered androids are still a great choice if you can handle the extra power usage and occasional radiation leak
|
# ? Oct 23, 2014 18:04 |
|
The Pentium AE really is pretty ridiculous. Since the pump on my Glacer 240L started to develop an obnoxious buzzing sound, I've temporarily put the stock HSF back on, and it's been fine at the same OC (4.6 @ 1.275V). I really don't play much of anything other than D3, but it's not gone above 80C.
|
# ? Oct 23, 2014 19:20 |
|
cisco privilege posted:I mean, sure, the Intel/WY synthetics are better, but if you're on a budget the AMD-powered androids are still a great choice if you can handle the extra power usage and occasional radiation leak I guess we know why they called it the APOLLO system
|
# ? Oct 23, 2014 19:35 |
|
keyvin posted:Its a recurring theme that when people ITT want to look at something positive we start talking about Intel. Oh, I dunno. I still love the 1090t machine I handed down to my parents. It even competes favorably against AMD's 2014 parts.
|
# ? Oct 23, 2014 20:22 |
|
Civil posted:It even competes favorably against AMD's 2014 parts.
|
# ? Oct 23, 2014 20:37 |
|
Civil posted:Oh, I dunno. I still love the 1090t machine I handed down to my parents. It even competes favorably against AMD's 2014 parts. That's not a good thing, and it's also the reason why I'm still on a 965 BE and 880FX.
|
# ? Oct 23, 2014 20:49 |
|
SwissArmyDruid posted:That's not a good thing, and it's also the reason why I'm still on a 965 BE and 880FX. You sure it's not because you are a techno-masochist? GokieKS posted:The Pentium AE really is pretty ridiculous. Since the pump on my Glacer 240L started to develop an obnoxious buzzing sound, I've temporarily put the stock HSF back on, and it's been fine at the same OC (4.6 @ 1.275V). I really don't play much of anything other than D3, but it's not gone above 80C. It didn't run D3 at stock?
|
# ? Oct 24, 2014 01:39 |
|
keyvin posted:You sure its not because you are a techno-masochist? I still use a 965BE because until now, I haven't come across any use case that required an upgrade.
|
# ? Oct 24, 2014 01:55 |
|
keyvin posted:It didn't run D3 at stock? Not sure how you're drawing this conclusion from what I said. I bought the Micro Center G3258 + MSI Z97 combo specifically to OC it and tide me over until the Haswell-E launch (still waiting for an Asus ROG X99 GENE motherboard). It did 4.6GHz easily with my Glacer 240L, and it's managed to keep that nearly 45% OC even with the stock HSF (though with better thermal compound).
|
# ? Oct 24, 2014 02:58 |
|
Ragingsheep posted:I still use a 965BE because until now, I haven't come across any use case that required an upgrade. I run a 955BE right now, for the same reason. I know that I would see a speedup in certain games, but it's not worth the money to me right now. And thankfully I've only had to compile the Linux kernel twice so far in my OS class...
|
# ? Oct 24, 2014 03:10 |
|
keyvin posted:You sure its not because you are a techno-masochist? My workload and the games I am currently playing have not yet demanded that I upgrade. I do, however, have a new machine budgeted for if and when Star Citizen comes out.
|
# ? Oct 24, 2014 03:25 |
|
Well, this could either be absolutely brilliant, or complete folly for AMD. It's hard to tell at this point. http://techreport.com/news/27259/cpu-startup-claims-to-achieve-3x-ipc-gains-with-visc-architecture It should be noted that AMD is a major investor in this venture, as is Mubadala (the company that owns GloFo, which just bought IBM's chip unit; read: a RISC business they have leverage with to shift toward VISC). If AMD can take this VISC architecture and integrate it with the HSA work they've already done, yeah, they will completely obviate the need for things like OpenCL libraries: the virtualized core's combined VISC/HSA middleware would ideally be composed of one or more CPU cores mixed with one or more GPU cores, break whatever work is appropriate out to the GPU, and present a single nondescript virtual core to applications for ease of programming. It could also obviate the need to make applications more multithreaded. Mad props for AMD if this was the end game all along. I'm excited. SwissArmyDruid fucked around with this message at 21:25 on Nov 4, 2014
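To make that concrete, here's a rough sketch (mine, not anything Soft Machines or AMD has shown) of what would be obviated: today, getting even a trivial data-parallel loop onto the GPU means writing an OpenCL kernel plus a pile of host-side setup, whereas the pitch is that the virtual core just runs the plain loop and farms the parallel part out on its own.
code:
/* The loop as a programmer would like to write it: */
void saxpy(float a, const float *x, float *y, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* ...versus today's explicit offload version of the same thing (shown for
   contrast only):

   __kernel void saxpy(float a, __global const float *x, __global float *y)
   {
       int i = get_global_id(0);
       y[i] = a * x[i] + y[i];
   }

   plus clGetPlatformIDs / clCreateBuffer / clEnqueueNDRangeKernel and friends
   on the host side before the kernel ever runs. */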
# ? Nov 4, 2014 21:17 |
|
SwissArmyDruid posted:Well, this could either be absolutely brilliant, or complete folly for AMD. It's hard to tell at this point. Sounds a lot like hyper-threading on CPU scale to me. I can imagine this making more threads runnable. I cannot see how this would give you better single-threaded performance, especially not with such a large speedup as is claimed. And single-thread is where the battle for the desktop is being won. EDIT: Maybe it is like dynamic vectorization/recompilation and GPU offloading? Hello, Transmeta.
|
# ? Nov 4, 2014 21:21 |
|
No Gravitas posted:Sounds a lot like hyper-threading on CPU scale to me. Here is the Tom's Hardware writeup, FWIW.
|
# ? Nov 4, 2014 21:29 |
|
Yeah. Take something single threaded and make it run in parallel where possible. Neat idea, even though I remain a skeptic. I do note they measure the speedup in instructions per core per cycle. What is the clock speed then? EDIT: Look at their pipeline. 11 stages. Out of this, the execute phase takes one stage, unless it is a long latency or memory operation in which case you have two stages. That's either a lot of work done in a stage or a very simple ISA where you need a lot of instructions to do something. For this to run x86, this will run either very slowly clockwise, or will have enormously long pipelines when running fast. EDIT2: Unless they manage to do their data accesses before they hit the execute stage, during dispatch or something. Might be possible, I guess. No Gravitas fucked around with this message at 21:57 on Nov 4, 2014 |
# ? Nov 4, 2014 21:34 |
|
No Gravitas posted:Sounds a lot like hyper-threading on CPU scale to me. I can imagine this making more threads runnable. I cannot see how this would give you better single-threaded performance, especially not with such a large speedup as is claimed. And single-thread is where the battle for the desktop is being won. The term you're looking for is speculative multithreading. It's been around in academia for a while now. It's basically the next step in CPU speculative execution, from the instruction level to the thread level. Here's a fun blast-from-the-past overview article about speculative multithreading from 2001. A few fun bits are the mini excerpts from Compaq and Cray. http://www.ece.umd.edu/~manoj/759M/SpMT.pdf Longinus00 fucked around with this message at 15:17 on Nov 5, 2014
# ? Nov 5, 2014 15:10 |
|
Today's poo poo that pisses me off is not work related. Banks releasing those apps so you can take pictures of cheques to deposit them. My bank has one, but surprise surprise, it doesn't work, so I still need to mail in my refund from Comcast like it's the 90s. Which leads on to: if Comcast can take money from my account, why can they not put it back in instead of sending me a cheque? Of course I know the answer, and it is that they hope people lose the cheque or can't be arsed to cash it.
|
# ? Nov 5, 2014 16:10 |
|
Yeah there's a lot of guarded language around it. What stuck out to me was "licensing and co-developing," so they want to be what ARM is now without a proven product. I helped with a research project as an undergrad that did speculation at the loop level, splitting out each iteration to a different core with enough messages to keep track of shared memory between them. There's plenty of research about similar things, but I'm skeptical without anything in a consumer's hand running realistic workloads. Mobile's a great development just because there's an expectation of a recompile before moving something over. Might not be free, but there are no legacy binaries kicking around justifying anything. $125M doesn't strike me as amazingly well funded either. Team of 500 for 2 years/200 for 5, assuming they'll eventually want to tape something out? Maybe you could pare it down to 50 architects and get huge discounts on masks and such, to get the luxury of a B0? No Gravitas posted:EDIT2: Unless they manage to do their data accesses before they hit the execute stage, during dispatch or something. Might be possible, I guess. Can't always know what memory's necessary before the prior instruction's out of execute.
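To make the loop-level idea concrete, here's a toy sketch (made up for illustration, not the actual research code): each worker runs a block of iterations into a private buffer, and a commit pass applies the buffers in program order; that commit point is where a real speculative system would detect cross-iteration conflicts and re-execute the offending chunk.
code:
/* Toy loop-level parallelization sketch: workers compute into private
   buffers, results are committed in original program order.
   Build with: cc -O2 -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>

#define N        1024
#define WORKERS  4

static int in[N], out[N];

struct chunk { int start, end; int buf[N]; };

static void *run_chunk(void *arg)
{
    struct chunk *c = arg;
    for (int i = c->start; i < c->end; i++)
        c->buf[i] = in[i] * 2 + 1;   /* "speculative" work into a private buffer */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        in[i] = i;

    pthread_t tid[WORKERS];
    static struct chunk ck[WORKERS];
    for (int w = 0; w < WORKERS; w++) {
        ck[w].start = w * (N / WORKERS);
        ck[w].end   = (w + 1) * (N / WORKERS);
        pthread_create(&tid[w], NULL, run_chunk, &ck[w]);
    }

    /* Commit in program order; a real system would check for conflicting
       reads/writes here and re-run any chunk that speculated wrong. */
    for (int w = 0; w < WORKERS; w++) {
        pthread_join(tid[w], NULL);
        for (int i = ck[w].start; i < ck[w].end; i++)
            out[i] = ck[w].buf[i];
    }

    printf("out[%d] = %d\n", N - 1, out[N - 1]);
    return 0;
}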
|
# ? Nov 5, 2014 16:29 |
|
Longinus00 posted:The term you're looking for is Speculative multithreading. Funny, I should have heard about it. I also adore the fact that the article has about five times as much space dedicated to citations as to content. JawnV6 posted:Can't always know what memory's necessary before the prior instruction's out of execute I know, I was just trying to come up with some way to make it work. I guess I'd really like this to succeed, but I just don't see how it could work in the end.
|
# ? Nov 5, 2014 18:33 |
|
No Gravitas posted:I do note they measure the speedup in instructions per core per cycle. What is the clock speed then? This is also why IPC measurements by themselves are meaningless and why Linley's article is a bad PR puff piece. Figure 4 is especially ridiculous in light of their design running at such frequencies. He gives a bit of a handwave to the problems I mentioned here, but then goes on to just parrot their press release (of course, the fact that the press release was at Linley's own conference might have something to do with the light touch he gave them).
The major thing they're selling, the "virtual cores," appears at this stage to be nothing more than a clustered OoO back-end. A prominent example of this type of design shipped over 15 years ago. (Seriously, compare slide 4 of this talk to Figure 2 in the MPR writeup.) Every wide OoO design you can find these days implements a clustered OoO engine because it lets you run faster and at lower power: fewer ports in the register files, simpler reservation stations, simpler dispatch logic, etc. They might also be sharing the backend between multiple threads. They're not exactly the first to implement SMT. Their current design (which, again, runs at ~350 MHz) is 2 clusters.
The big lie-by-omission here is that they're implying that their design will easily scale up (e.g. see slide 11 of their Linley PC talk). They expect not only that their frequency will improve, but that they can easily and continually scale the number of back-end stages of their pipeline that need to talk to one another. I'll believe it when I see it, as their claims that "this is mostly logistics" are naive or disingenuous.
This is all very likely a desperate media blitz to pick up investors. The EE Times writeup says this pretty plainly. quote:[Lingareddy, CEO of Soft Machines] aims to close a "huge" Series C funding round to finance the startup for next three years. They've carefully put together marketing materials, cherry-picked benchmarks, shown enticing numbers with no underlying data or technical explanation, and hit all the right news sources in order to advertise their company without showing that they're behind on their designs (because building hardware is hard). As Jawn mentioned, they're talking about licensing. This is probably because that's the only way they can make money without burning through another 7 years trying to make a chip that works. Show off a couple of prototypes and hopefully get licensed or bought out.
Now their actual novel invention may be that the front-end of their pipeline does dynamic binary translation (akin to Transmeta or NV's Denver, which employed a boatload of Transmeta people). They may be able to use that to reduce the average amount of interconnect you need between your clusters -- if you can dynamically find medium-sized chunks of independent code, then you may be able to get away with having minimal interconnect across your clusters and make it easier to scale up. The fact that they're not showing any real data about this, to me, implies that they're finding this very difficult to accomplish, either in hardware/firmware or for general-purpose codes. This might be why they see good relative performance on libquantum, for example. This benchmark is easy to auto-parallelize.
If their hardware frontend is doing this (and maybe vectorization as well) but they're only using one core for the other processors (even though a simple GCC pass could create the same type of parallel code and vectorize the data-parallel portions), then they'll show better performance while not really showing any data. Overall, they may have a good idea in there somewhere. But the data they're showing now reeks of desperation. They're coming out of stealth mode because the money ran out before they could get their hardware to perform. The numbers they're showing are pure fluff, and they're being very cagey about what, exactly, their hardware is supposed to be doing. If it's what I've claimed here (binary translation doing auto-parallelization), then I'm skeptical that it will ever work in the general case. If their big result is actually the numbers they're currently showing, then this is all a bunch of hogwash.
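For reference, here's roughly why libquantum is the cherry-pick of choice. This is not the actual SPEC source, just the flavor of its hot loops: every element of the register is updated independently, so an OpenMP pragma or GCC's -ftree-parallelize-loops/-ftree-vectorize gets you the same kind of parallel, vectorized code with no exotic hardware.
code:
#include <stdint.h>
#include <stddef.h>

/* Roughly the shape of libquantum's hot loops (illustrative, not the real
   source): no cross-iteration dependence, so it parallelizes trivially. */
void flip_bits(uint64_t *state, size_t n, uint64_t mask)
{
    #pragma omp parallel for          /* or let -ftree-parallelize-loops do it */
    for (size_t i = 0; i < n; i++)
        state[i] ^= mask;             /* each element is independent */
}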
|
# ? Nov 6, 2014 06:15 |
|
Thanks, Menacer! I can always count on you to cut through the fluff and crap and zoom in on what's really important!
|
# ? Nov 6, 2014 07:02 |
|
JawnV6 posted:I want a smart agent that can arbitrarily modify any cache line that passes it with a 1 cycle penalty. I want two implementations of the same ISA that can transfer uarch state through a sideband. I want to expose that knob to my compiler instead of hiding things behind a DVFS table hack with thousands of cycles to shuffle things over. I want my DDR controller to support atomic operations so that my cards can set flags without 8 round trips. As for your memory controller doing your atomic operations for you, I believe that Micron's Hybrid Memory Cube can do this (See Section 9.10.3). The memory controller sits at the bottom of a 3D stack (cores talk with it over a packetized connection, rather than doing all of the DRAM timing, etc. themselves). You can have that memory controller do some interesting things with a single command.
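As a toy illustration of why that matters (function names are made-up stand-ins, not Micron's actual interface): without an in-memory atomic, setting a flag costs a read across the link, a modify on your side, and a write back; with one, it's a single command and the logic die at the bottom of the stack does the work.
code:
#include <stdint.h>
#include <stdio.h>

/* Stand-in "device memory" so this runs locally; in real HMC these would be
   request packets over the serial links, not function calls. */
static uint64_t mem[16];

static uint64_t dev_read(int addr)                  { return mem[addr]; }
static void     dev_write(int addr, uint64_t v)     { mem[addr] = v; }
static void     dev_atomic_or(int addr, uint64_t v) { mem[addr] |= v; } /* done in-memory */

int main(void)
{
    /* Without in-memory atomics: two link crossings per flag. */
    uint64_t v = dev_read(0);
    dev_write(0, v | (1u << 3));

    /* HMC-style: one atomic request, no read round trip. */
    dev_atomic_or(1, 1u << 3);

    printf("%llx %llx\n", (unsigned long long)mem[0], (unsigned long long)mem[1]);
    return 0;
}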
|
# ? Nov 6, 2014 07:45 |
|
When the day comes to replace my 3570K, AMD will have something worth buying, right guys? Right?
|
# ? Nov 7, 2014 03:07 |
|
Well, if the Zen uarch is worth anything, then maybe actually! If.
|
# ? Nov 7, 2014 04:57 |
|
Factory Factory posted:Well, if the Zen uarch is worth anything, then maybe actually! If. If... Intel stops all development in the meantime.
|
# ? Nov 7, 2014 05:39 |
|
El Scotch posted:When the day comes to replace my 3570k AMD will have something worth buying, right guys?
|
# ? Nov 7, 2014 06:01 |
|
So, imagine my surprise when I heard that it was the HBM project AMD was working on with Hynix that actually panned out, as opposed to Nvidia's HMC. Now, let's face it, it's new technology. It's not going to be cheap. Nvidia may have announced that they are even going to use AMD/Hynix's HBM, but they're going to be a year behind in getting these products out the door. Furthermore, like all new technologies, the initial rollout is going to be expensive. But, 12 months down the line, when Nvidia starts migrating to stacked memory, we could see AMD APUs with a single stack of HBM on the package as dedicated graphics memory. Discuss.
|
# ? Dec 23, 2014 20:45 |
|
SwissArmyDruid posted:AMD's semi-custom silicon business, since they just won another contract with Nintendo to provide the hardware for entire ecosystem. Oh, nice, didn't hear about this. It would be amusing if Nintendo, with its forgotten and little-loved Wii U, suddenly had the most powerful of the 3 machines. Maybe they could include a controller designed for human hands with a good battery life!
|
# ? Dec 23, 2014 20:53 |
|
One of my favorite things to daydream about is imagining what the best AMD engineers could come up with if they had the same funding and process nodes as Intel. Or reverse that situation: take the number of Intel engineers you could pay with AMD's salary limitations and give them GF or TSMC (when it was struggling with 32nm) to design for, with a similar R&D budget. Given how incredibly mismanaged AMD was under the former CEO and how comparatively little cash they've had for R&D, you just have to KNOW there are some bad assed engineers on board to have gotten AMD to a point where they briefly delivered a superior platform vs Intel and managed to stay alive this long after all the anti-competitive practices Intel used against them. It reminds me of the space race in a way.
|
# ? Dec 24, 2014 14:50 |
|
Bloody Antlers posted:One of my favorite things to daydream about is imagining what the best AMD engineers could come up with if they had the same funding and process nodes as Intel. Everybody loves an underdog.
|
# ? Dec 24, 2014 18:01 |
|
They hired a bunch of guys from DEC that worked on the Alpha right around the time Compaq bought what was left of DEC. Those were the people that made the Athlon.
|
# ? Dec 25, 2014 05:15 |
|
El Scotch posted:When the day comes to replace my 3570k AMD will have something worth buying, right guys? The Kabini can be worth buying, in certain circumstances. If you want a low-power processor that'll do AES-NI, you don't have a ton of options. The Athlon 5350 is an OK laptop-level processor. High-power stuff (>50W)? Nah, buy a Pentium AE or an i5/i7.
|
# ? Dec 25, 2014 06:00 |
|
thebigcow posted:They hired a bunch of guys from DEC that worked on the Alpha right around the time Compaq bought what was left of DEC. Those were the people that made the Athlon. That's what My Father From DEC has always claimed: "The Athlon XP/64 was just the practical commercialization of the Alpha architecture." Good to hear it's not just DadTales. (He's a survivor who works for HP now.) Paul MaudDib fucked around with this message at 08:36 on Dec 25, 2014
# ? Dec 25, 2014 08:31 |