Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

PerrineClostermann posted:

So this popped up on my news feed...

Am I correct in assuming this doesn't bode well for AMD?

That's a particularly dour reporting job. AnandTech and Tech Report are much more neutral-to-positive.


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

He was pretty much just interim president anyway, wasn't he? I'm one of the more consistent "AMD is totally hosed in the long run for a huge list of reasons" posters, and I don't see this leaping out as particularly bad news.

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva
edit: Wrong thread. Nothing to see here.

JawnV6
Jul 4, 2004

So hot ...

Menacer posted:

They're aware that IBM's engineers have produced microarchitectures that perform worlds better than anyone working in the ARM world so far.

A little late to this, but are any of them still there? I really enjoyed The Race for a New Game Machine, but from more recent news it doesn't sound like that culture is supported any more.

Which really sucks for a lot of reasons. We all agree that ISA is irrelevant. Decode takes a tiny fraction of die space. But if you could make a new design from the ground up, like what happened at IBM in the early '00s, there are some out-there uarchs that might be competitive. We'll never know until there's Arduino-level tools for playing with new architectures.

Rastor
Jun 2, 2001

POWER architecture has been around for some time and claims to be gearing up to face down Intel at the high end, while ARM continues to challenge at the low end and even MIPS is making some noise, expecting to be supported in the next Android release.

Will they be successful? Hopefully at least enough that it benefits consumers.

Menacer
Nov 25, 2000
Failed Sega Accessory Ahoy!

JawnV6 posted:

A little late to this, but are any of them still there? I really enjoyed The Race for a New Game Machine, but from more recent news it doesn't sound like that culture is supported any more.

Which really sucks for a lot of reasons. We all agree that ISA is irrelevant. Decode takes a tiny fraction of die space. But if you could make a new design from the ground up, like what happened at IBM in the early '00s, there are some out-there uarchs that might be competitive. We'll never know until there's Arduino-level tools for playing with new architectures.
Some are, some aren't. My point was more that current designs from IBM perform far better at the high end than current ARM designs can. This will likely be true not just for this generation, but through the latter part of the decade. Even if IBM's microarchitecture stagnates (I don't think it will, completely), the ARM design houses are starting the race very far behind.

As an example, the ARM A57 (which Qualcomm picked over their custom designs this generation, and not just to pick up 64-bit) is basically straight out of Hennessy & Patterson. All of the thousands of little iterative things you do to squeeze performance out of a core don't exist yet. This won't even begin to change until their 2015-2016 design refresh.

To be clear, I'm not saying that ARM design houses can't hit that performance level. It will take engineering time and money, however, and most of these folks are designing for smart phones in bulk, tablets at the high end, and maybe dense mid-performance servers. The companies pushing for the latter will be the ones that might find their designs in the HPC market -- but for now, it looks like Nvidia is betting the near term on POWER designs. (With Samsung dropping their ARM server designs and joining OpenPOWER, they might be doing the same for servers.) I wouldn't expect Denver, the closest to your desire for a fresh uarch and which is very much targeted at smartphones, to be optimized for HPC, and that likely won't change for a few generations at the earliest.

As an aside, OpenPOWER is an amazing conglomeration of politics. The POWER division of IBM wants markets other than IBM's own sales and services divisions, because it gets hosed on profit margins there (e.g. the sales division gets huge bonuses for bringing in massive profits, while the hardware division gets yelled at for not making any money when all its hardware goes to Sales at cost).

Google wants beige box POWER servers built by companies like Tyan in order to force Intel to drop Xeon prices, as they're one of the few companies that would follow through with the threat to switch their internal stack to a different ISA. (They'll probably let Tyan and IBM eat poo poo once Intel gives them better deals, though)

Nvidia and Mellanox want SoCs targeted at HPC with both GPGPU and communication on-chip, because one of the big complaints coming from the big HPC customers is that discrete cards sitting over a PCIe bus are awful. Knights Landing sitting in a QPI-enabled socket with all of the system's DRAM will basically wreck the discrete accelerator market in supercomputers (and IIRC, KNL will have on-chip communication hardware as well). HPC is likely why Altera has hopped on, as well. My reasoning above is why I believe they won't be building ARM-based SoCs yet.

Menacer fucked around with this message at 04:05 on Oct 10, 2014

No Gravitas
Jun 12, 2013

by FactsAreUseless

JawnV6 posted:

We'll never know until there's Arduino-level tools for playing with new architectures.

You can get quite far with FPGAs. It isn't quite as simple, but considering you are designing a loving CPU...

(Two cool "actually used in the wild" ISA CPUs implemented here. Next up: Coldfire!)

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Rastor posted:

POWER architecture has been around for some time and claims to be gearing up to face down Intel from the high end, while ARM continues to challenge on the low end and even MIPS making some noise as they are expecting to be supported in the next Android release.
MIPS has been supported for a while; MIPS64 is the new hotness.

Menacer
Nov 25, 2000
Failed Sega Accessory Ahoy!

No Gravitas posted:

You can get quite far with FPGAs. It isn't quite as simple, but considering you are designing a loving CPU...

(Two cool "actually used in the wild" ISA CPUs implemented here. Next up: Coldfire!)
It's much harder than this if you're trying to analyze the performance implications of microarchitectural choices. First, it's not very useful to have a processor design without all of the support infrastructure (memory controllers and DRAM, I/O, an OS, etc.), and the infrastructure for projects like OpenRISC is at "well, it works, I guess" levels of performance accuracy. You need a booting OS and a working tools infrastructure, too, because microbenchmarks can only get you so far.

Even if you did have a working system, though, if your DRAM is running at DDR2 or DDR3 speeds and your FPGA-implemented processor is running at 100 MHz, the performance numbers you get will be all out of whack. You'll have too much bandwidth and too little memory latency, for instance. You wouldn't optimize this design the same way you would optimize a design trying to run 30x as fast and with 4-16x as many cores sharing the memory controller.
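
To put rough numbers on that mismatch -- a minimal sketch, where the 50 ns DRAM latency and both clock speeds are illustrative assumptions, not measurements of any real system:

code:
# The same DRAM access, seen from a 100 MHz FPGA soft core vs. the
# ~3 GHz ASIC you are actually trying to model. Numbers are assumed.
DRAM_LATENCY_NS = 50.0

def miss_cost_in_core_cycles(core_mhz):
    cycle_ns = 1000.0 / core_mhz
    return DRAM_LATENCY_NS / cycle_ns

for label, mhz in [("FPGA prototype", 100.0), ("ASIC target", 3000.0)]:
    print(f"{label} @ {mhz:.0f} MHz: a miss costs "
          f"~{miss_cost_in_core_cycles(mhz):.0f} core cycles")

# FPGA prototype @ 100 MHz: a miss costs ~5 core cycles
# ASIC target @ 3000 MHz: a miss costs ~150 core cycles

A prefetcher or a bigger L2 that looks pointless when misses cost 5 cycles can be the most important structure on the chip when they cost 150.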

OpenSPARC ran into this problem -- you could only fit a single core onto the Xilinx FPGA they would sell you, out of the eight you would find in a real design. In addition, their memory controller, I/O, etc. were all emulated in software on the FPGA's embedded hard cores. And this was an in-order core. I can't, off the top of my head, remember any projects that used OpenSPARC for performance analysis.

The other major problem is that, after you've implemented all of this in e.g. Verilog, it's very difficult to change things. You can't go back into a design and quickly sweep through a lot of design-space options. Making a more generic hardware description language was the goal of the Bluespec project, but I haven't seen many people using it to search uarch options.
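
To make the "sweep through design-space options" point concrete, this is the kind of loop that's trivial against a high-level analytical model and painful against committed Verilog. The miss-rate and area formulas are made-up placeholders, not real data:

code:
# Toy uarch design-space sweep over L1 size and associativity.
# The cost models below are placeholders, not measurements.
import itertools
import math

MISS_PENALTY_CYCLES = 150   # assumed cost of a miss to DRAM
BASE_MISS_RATE = 0.10       # assumed miss rate at 32 KB direct-mapped
AREA_BUDGET = 96            # arbitrary area units

def est_cpi(l1_kb, ways):
    # sqrt-rule for capacity, a small win per doubling of associativity
    miss_rate = BASE_MISS_RATE * math.sqrt(32 / l1_kb) * 0.8 ** math.log2(ways)
    return 1.0 + miss_rate * MISS_PENALTY_CYCLES

def area(l1_kb, ways):
    return l1_kb * (1 + 0.05 * ways)   # tags and muxes cost a little extra

configs = [(kb, w) for kb, w in itertools.product([16, 32, 64, 128], [1, 2, 4, 8])
           if area(kb, w) <= AREA_BUDGET]
best = min(configs, key=lambda c: est_cpi(*c))
print(f"best (KB, ways) under the area budget: {best}, CPI ~ {est_cpi(*best):.2f}")

In an HDL you'd be re-plumbing tag arrays and replacement logic for every point in that product.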

Jawn, maybe you would be interested in FabScalar.

JawnV6
Jul 4, 2004

So hot ...

Menacer posted:

To be clear, I'm not saying that ARM design houses can't hit that performance level. It will take engineering time and money, however, and most of these folks are designing for smart phones in bulk, tablets at the high end, and maybe dense mid-performance servers. The companies pushing for the latter will be the ones that might find their designs in the HPC market -- but for now, it looks like Nvidia is betting the near term on POWER designs. (With Samsung dropping their ARM server designs and joining OpenPOWER, they might be doing the same for servers.) I wouldn't expect Denver, the closest to your desire for a fresh uarch and which is very much targeted at smartphones, to be optimized for HPC, and that likely won't change for a few generations at the earliest.
I used to think, re: the x86 vs. ARM debate, that x86 would shrink down and have this giant 'toolbox' of features to pick and choose from, since they'd done much bigger, full-featured designs. Like the tradeoffs between 4-2-2-2 vs. 4-4-1 decoding were common knowledge without the need to prototype or investigate, while ARM was having to grow up and out, building the toolbox as they went and choosing where R&D efforts had to be spent.

From my new position, I'm not so certain the "feature toolbox" is such a gamechanger. It's certainly very useful and helpful. But there are huuuuuge benefits to working in an ecosystem. Think about the debug toolchain: IBM has one team delivering it; ARM has 3~4 companies actively fighting for that slice of the pie. A lot of that is for downstream consumers, but the tooling of an ecosystem is going to beat a monolithic entity.

Menacer posted:

As an aside, OpenPOWER is an amazing conglomeration of politics. The POWER division of IBM wants markets other than IBM's own sales and services divisions, because it gets hosed on profit margins there (e.g. the sales division gets huge bonuses for bringing in massive profits, while the hardware division gets yelled at for not making any money when all its hardware goes to Sales at cost).
lol

No Gravitas posted:

You can get quite far with FPGAs. It isn't quite as simple, but considering you are designing a loving CPU...
I picked those points with a thorough understanding of both. Pointing out how far away we are from that level of integration reinforces my point. Menacer raised some pertinent system-level modeling concerns. There's a giant gap between a hypothetical block diagram of datapaths and a CPU on a shelf that can be benchmarked. That gap includes a huge team under market pressures. What's possible if the team weren't necessary? What's possible if profitability weren't necessary? Minor features that cause the slightest hiccup get killed in a risk-averse environment. Radical rearchitectures aren't even discussed any more. Feature descriptions that begin with "Microbenchmarks won't benefit, but full software will see 25%..." don't make it off an architect's whiteboard.

I want a smart agent that can arbitrarily modify any cache line that passes it with a 1 cycle penalty. I want two implementations of the same ISA that can transfer uarch state through a sideband. I want to expose that knob to my compiler instead of hiding things behind a DVFS table hack with thousands of cycles to shuffle things over. I want my DDR controller to support atomic operations so that my cards can set flags without 8 round trips.
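
That last one is easy to price out with a toy latency model (the 500 ns round trip and the breakdown of the steps are assumptions for illustration, not measurements):

code:
# A PCIe device setting a flag in host DRAM, two ways.
PCIE_ROUND_TRIP_NS = 500.0  # assumed bus round-trip latency

# Today: lock/read/modify/write/release handshakes -- call it the
# 8 round trips from the post above.
rmw_over_bus_ns = 8 * PCIE_ROUND_TRIP_NS

# Wished for: one posted atomic executed at the DDR controller itself.
atomic_at_controller_ns = 1 * PCIE_ROUND_TRIP_NS

print(f"read-modify-write over the bus: {rmw_over_bus_ns:.0f} ns")
print(f"atomic at the controller:       {atomic_at_controller_ns:.0f} ns")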


Menacer posted:

The other major problem is that, after you've implemented all of this in e.g. Verilog, it's very difficult to change things. You can't go back into a design and quickly sweep through a lot of design space options. Making a more generic hardware descriptions language was the goal of the bluespec project, but I haven't seen many people using it to search uarch options.
One space where Bluespec shines is test collateral. You can quickly generate a responsive interface to a new protocol just by specifying some rules about transaction handling, without the need for a designer to write each real layer. My brief interaction with it made it seem like it was abstracted above the ability to specify uarch details to sweep through, though.

Other metaphors: "Don't tell me where to put the studs, just tell me how many rooms you want", "LINQ lets you ask the compiler for 'what' data you want, instead of specifying 'how' to generate it operation by operation"
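
The LINQ metaphor in miniature, in Python rather than C# (purely illustrative):

code:
# "What, not how": the same query written both ways.
orders = [{"sku": "cpu", "qty": 2}, {"sku": "gpu", "qty": 1},
          {"sku": "cpu", "qty": 5}]

# How: spell out the iteration, the test, and the accumulation.
total = 0
for order in orders:
    if order["sku"] == "cpu":
        total += order["qty"]

# What: state the result you want; the runtime picks the mechanics.
total_declarative = sum(o["qty"] for o in orders if o["sku"] == "cpu")

assert total == total_declarative == 7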

Menacer posted:

Jawn, maybe you would be interested in FabScalar.
It looks interesting. I'm probably too rusty on all this to make good use of it though :/

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

JawnV6 posted:

I used to think, re: the x86 vs. ARM debate, that x86 would shrink down and have this giant 'toolbox' of features to pick and choose from, since they'd done much bigger, full-featured designs. Like the tradeoffs between 4-2-2-2 vs. 4-4-1 decoding were common knowledge without the need to prototype or investigate, while ARM was having to grow up and out, building the toolbox as they went and choosing where R&D efforts had to be spent.

From my new position, I'm not so certain the "feature toolbox" is such a gamechanger. It's certainly very useful and helpful. But there are huuuuuge benefits to working in an ecosystem. Think about the debug toolchain: IBM has one team delivering it; ARM has 3~4 companies actively fighting for that slice of the pie. A lot of that is for downstream consumers, but the tooling of an ecosystem is going to beat a monolithic entity.

lol

I picked those points with a thorough understanding of both. Pointing out how far away we are from that level of integration reinforces my point. Menacer raised some pertinent system-level modeling concerns. There's a giant gap between a hypothetical block diagram of datapaths and a CPU on a shelf that can be benchmarked. That gap includes a huge team under market pressures. What's possible if the team weren't necessary? What's possible if profitability weren't necessary? Minor features that cause the slightest hiccup get killed in a risk-averse environment. Radical rearchitectures aren't even discussed any more. Feature descriptions that begin with "Microbenchmarks won't benefit, but full software will see 25%..." don't make it off an architect's whiteboard.

I want a smart agent that can arbitrarily modify any cache line that passes it with a 1 cycle penalty. I want two implementations of the same ISA that can transfer uarch state through a sideband. I want to expose that knob to my compiler instead of hiding things behind a DVFS table hack with thousands of cycles to shuffle things over. I want my DDR controller to support atomic operations so that my cards can set flags without 8 round trips.

One space where Bluespec shines is test collateral. You can quickly generate a responsive interface to a new protocol just by specifying some rules about transaction handling, without the need for a designer to write each real layer. My brief interaction with it made it seem like it was abstracted above the ability to specify uarch details to sweep through, though.

Other metaphors: "Don't tell me where to put the studs, just tell me how many rooms you want", "LINQ lets you ask the compiler for 'what' data you want, instead of specifying 'how' to generate it operation by operation"

It looks interesting. I'm probably too rusty on all this to make good use of it though :/

Bluespec was at one point a Haskell HDL.

Arch does not matter at all in the server space, because I guarantee you that the vast majority of code is some LOB junk running on a VM on a VM on a VM.

What matters is TCO in the aggregate data center, but even that's not being measured by programmers, and in all honesty there is not a whole lot a new general-purpose ISA is gonna do.

"What, not how" is a very powerful technique for modeling a lot of domains, and it's not surprising it helps hardware design out.

Would be interesting to see what can be done. The Mill vaporware is kind of interesting just for that alone.

JawnV6
Jul 4, 2004

So hot ...

Menacer posted:

Jawn, maybe you would be interested in FabScalar.

I ended up requesting access to the group for this. Someone peeked at my LinkedIn; haven't heard anything else from it, though.

Menacer
Nov 25, 2000
Failed Sega Accessory Ahoy!
The student probably graduated and never updated the owner of the group.

Try emailing Eric or one of his current grad students to see what the hell is up.

Rastor
Jun 2, 2001

AMD just announced earnings. Profits down 65%, they are going to lay off 7% of staff (about 700 people).

Paul MaudDib
May 2, 2006

TEAM NVIDIA:
FORUM POLICE

Rastor posted:

AMD just announced earnings. Profits down 65%, they are going to lay off 7% of staff (about 700 people).

In particular, their GPU revenue is really starting to decline. That's one of their lifelines at this point, so that's not a good sign.

They really need to get their new GPU architecture out ASAP. Supposedly their fab has been struggling with the 20nm process node.

Panty Saluter
Jan 17, 2004

Making learning fun!
The Nvidia 9xx series might be the final nail in the coffin. Far as I can tell there just isn't a reason to buy anything else right now unless all you need is a (low margin) budget card. I hope AMD pulls through but it looks really bad.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Panty Saluter posted:

The Nvidia 9xx series might be the final nail in the coffin. Far as I can tell there just isn't a reason to buy anything else right now unless all you need is a (low margin) budget card. I hope AMD pulls through but it looks really bad.
If they're really dependent on 20nm, it bodes poorly; 28nm will have better yields for a long, long time, to the extent that when the AMD 20nm GPU does ship it probably won't be available anywhere in the short term anyway.

SwissArmyDruid
Feb 14, 2014
How likely are we to see AMD make a concerted jump straight to Samsung's 14nm-or-whatever-sub-20nm process? GPU, APU, and CPU, I mean? I might actually entertain buying an AMD product at that point, but right now, I want to take my all-AMD rig and fling it off the top of my building. (No CPUs worth writing home about; gently caress atikmdag.sys and destinythegame.com's web video being a 100% guaranteed BSOD on my computer/drivers/video card/whatever.)

SwissArmyDruid fucked around with this message at 03:50 on Oct 18, 2014

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.

SwissArmyDruid posted:

gently caress atikmdag.sys and destinythegame.com's web video being a 100% guaranteed BSOD on my computer/drivers/video card/whatever.
Vine videos on Tumblr in Firefox BSOD me within ~1 minute of watching them, but Chrome is immune. Might want to try switching browsers at least for some content/sites if this is the same for you.

SwissArmyDruid
Feb 14, 2014

Aleksei Vasiliev posted:

Vine videos on Tumblr in Firefox BSOD me within ~1 minute of watching them, but Chrome is immune. Might want to try switching browsers at least for some content/sites if this is the same for you.

Vine doesn't do any of that to me, but Chrome is the only browser I use and it isn't immune to BSOD.

Winifred Madgers
Feb 12, 2002

Over in the parts picking thread I'm waffling between a Celeron J1900 and an AMD APU of some kind for a low power HTPC/minimal gaming system. The only game I'm likely to play any of is Portal 2. I get pretty decent performance out of it, at least in a brief test, on my 8W 1 GHz A6-1450 netbook, but I'm not sure how that'll translate to a 1080p TV. I'm assuming a newer 10W 2 GHz Intel will outperform it CPU-wise, and I have a Radeon 6550 that is more than enough to carry the GPU end of things should it be necessary.

But if I go with an AMD, what would be a good starting point for not much more than the $70 all-in-one ASRock Q1900M for the Intel? It won't be powered on all the time so power use isn't as big a concern, although I do like the passive cooling on the Celeron. But if I can get significantly better performance from an AMD for not much more money, I think I might, especially if I can get an all-in-one and free up the possibility of mITX since I still need to fit in a capture card.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

EX-GAIJIN AT LAST posted:

Over in the parts picking thread I'm waffling between a Celeron J1900 and an AMD APU of some kind for a low power HTPC/minimal gaming system. The only game I'm likely to play any of is Portal 2. I get pretty decent performance out of it, at least in a brief test, on my 8W 1 GHz A6-1450 netbook, but I'm not sure how that'll translate to a 1080p TV. I'm assuming a newer 10W 2 GHz Intel will outperform it CPU-wise, and I have a Radeon 6550 that is more than enough to carry the GPU end of things should it be necessary.

But if I go with an AMD, what would be a good starting point for not much more than the $70 all-in-one ASRock Q1900M for the Intel? It won't be powered on all the time so power use isn't as big a concern, although I do like the passive cooling on the Celeron. But if I can get significantly better performance from an AMD for not much more money, I think I might, especially if I can get an all-in-one and free up the possibility of mITX since I still need to fit in a capture card.

your starting point would be Kabini:

http://www.anandtech.com/show/7933/the-desktop-kabini-review-part-1-athlon-5350-am1
http://www.anandtech.com/show/8067/amd-am1-kabini-part-2-athlon-53505150-and-sempron-38502650-tested

These are kinda like what Intel is doing with the J1900.

Further up would be the regular Socket FM2 APUs.

Here is the J1900 to compare to:

http://www.anandtech.com/show/8595/the-battle-of-bay-trail-d-gigabyte-j1900n-d3v-asus-j1900i-c-review

Remember that the GPU demands of 1080p are vastly higher than those of a netbook screen, which is probably 1366x768 at most.
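
Putting a number on that (pure pixel arithmetic -- roughly double the pixels per frame):

code:
netbook = 1366 * 768      # 1,049,088 pixels
tv_1080p = 1920 * 1080    # 2,073,600 pixels
print(f"1080p pushes {tv_1080p / netbook:.1f}x the pixels per frame")  # ~2.0x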

Honestly, having taken a closer look at the performance, I think you should stick with a regular Haswell Celeron and a cheap motherboard, and throw in your old GPU. Failing that, the J1900 + your old GPU.

Kaveri APUs might be decent, but that would cost more since the APUs themselves are more expensive than Celerons, and given that you already have a GPU to reuse, the integrated GPU would have to outperform the 6550 to be worthwhile - you'd need to research it.

The Lord Bude fucked around with this message at 14:54 on Oct 18, 2014

Winifred Madgers
Feb 12, 2002

Worst case scenario, I can just play on the netbook, since I know that already does well enough; I just hadn't really considered it would even be worth trying until now. Looking at the benchmarks, it seems the J1900 is close to par with a moderate Core 2 Duo even in single thread, and that's really all I still need for this system. The only reason I'm even doing an upgrade is for a smaller case (my old one is an overclocked Pentium E2160 on a full ATX board in a mid tower, and the motherboard has lost its NIC) and an SSD, plus the power supply is getting quite old, although in the last 3 years or so it's seen a lot less use. Performance-wise it's still fine, but moving to a mATX HTPC case means a new motherboard anyway, so I might as well spend my money on something new rather than trying to find a Socket 775 board at this point.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I love my ASRock Q1900-ITX, but I do not expect it to game. Portal 2 is a struggle for my laptop, which has an older Sandy Bridge CPU with Intel HD 3000 graphics, and the J1900's GPU is certainly not better. If you want local gaming (and not just Steam Home Streaming like I use, for which the J1900 works great), then I would definitely favor an AM1 APU. The CPU strength is actually equal, with a far better GPU. The downside is power consumption - 25W vs. 10W for Bay Trail D. Now, that 15W is important to me for an always-on NAS/HTPC, but for a system that actually gets turned off, you may struggle to give a poo poo.
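
If you want to price that 15W out, a quick sketch (the electricity rate and usage hours are assumptions):

code:
# Cost of the 15 W delta (25 W AM1 vs. 10 W Bay Trail-D, per above).
RATE_PER_KWH = 0.12   # assumed electricity price, $/kWh
DELTA_W = 25 - 10

def yearly_cost_usd(watts, hours_per_day):
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return kwh_per_year * RATE_PER_KWH

print(f"always-on NAS/HTPC: ${yearly_cost_usd(DELTA_W, 24):.2f}/year")  # ~$15.77
print(f"2 h/day HTPC:       ${yearly_cost_usd(DELTA_W, 2):.2f}/year")   # ~$1.31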

Winifred Madgers
Feb 12, 2002

I let my girls leave on a 13W CFL in the hallway all night as a sort of night light; I'm not going to sweat something like that for a system that'll be on a couple of hours a day at most.

However, the 6550 does quite well in this game at 1080p already, so if the CPU power is equal I'm still favoring the Celeron. Part of me does wonder about longevity, I admit, and I loved the challenge of extreme overclocking the E2160. I nearly reached 3 GHz with a crazy overvolt, although for everyday use I backed it off to a more reasonable 50% overclock, and I obviously still like talking about it to this day, so the G3258 still holds some vanity/futureproofing appeal (as it were).

Something that occurred to me, and maybe I should take this back to the parts picking or overclocking thread, is actually undervolting a G3258 for better temps in a smallish case. I'm assuming that is still an option on overclocking motherboards.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

EX-GAIJIN AT LAST posted:

I let my girls leave on a 13W CFL in the hallway all night as a sort of night light; I'm not going to sweat something like that for a system that'll be on a couple of hours a day at most.

However, the 6550 does quite well in this game at 1080p already, so if the CPU power is equal I'm still favoring the Celeron. Part of me does wonder about longevity, I admit, and I loved the challenge of extreme overclocking the E2160. I nearly reached 3 GHz with a crazy overvolt, although for everyday use I backed it off to a more reasonable 50% overclock, and I obviously still like talking about it to this day, so the G3258 still holds some vanity/futureproofing appeal (as it were).

Something that occurred to me, and maybe I should take this back to the parts picking or overclocking thread, is actually undervolting a G3258 for better temps in a smallish case. I'm assuming that is still an option on overclocking motherboards.

Just run the 3258 at stock... temperatures aren't an issue. Or get one of the cheaper non-overclocking Pentiums.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".
Anyone else have this friend? He blows $1000+ at Fry's and has to show off his build. I tried talking him into exchanging his CPU before he opened everything up. He got an R9 270, so the slight benefits of getting an APU are irrelevant.

This is AMD's customer.


Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Civil posted:

Anyone else have this friend? He blows $1000+ at Fry's and has to show off his build. I tried talking him into exchanging his CPU before he opened everything up. He got an R9 270, so the slight benefits of getting an APU are irrelevant.

This is AMD's customer.



I've seen a few but luckily none are personal friends. They tend to be the type that are completely certain that larger numbers are better and that all comparisons, even between different chip architectures, are apples to apples. Being religiously brand loyal, even to a brand that had nothing amazing to offer until the Athlon, adds to their fervor. I've got an old Cyrix 586 that was made for this kind of person (in 1996 or 97 or whatever).

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars



We call this guy Travis.

I know a guy who knows exactly how bad an idea all of this is but does it anyway out of some sort of misplaced loyalty or something.

SwissArmyDruid
Feb 14, 2014

Civil posted:

Anyone else have this friend? He blows $1000+ at Fry's and has to show off his build. I tried talking him into exchanging his CPU before he opened everything up. He got an R9 270, so the slight benefits of getting an APU are irrelevant.

This is AMD's customer.



AMD's current offerings are what I recommend to people I don't like very much, but am obligated to support for whatever reason.

Beautiful Ninja
Mar 25, 2009

Five time FCW Champion...of my heart.

SwissArmyDruid posted:

AMD's current offerings are what I recommend to people I don't like very much, but am obligated to support for whatever reason.

Is your plan to slowly roast a person in his home using an AMD processor?

SwissArmyDruid
Feb 14, 2014

Beautiful Ninja posted:

Is your plan to slowly roast a person in his home using an AMD processor?

My plan is to trap them on 990FX.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
I don't have people that stupid in my life.

SYSV Fanfic
Sep 9, 2003

by Pragmatica
Maybe your friend is earning fat stacks by astroturfing.

Rastor
Jun 2, 2001

For those who are friends with idiots bragging about an AMD APU purchase, those just got price cuts.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

keyvin posted:

Maybe your friend is earning fat stacks by astroturfing.
That's cool, I'm paid off by Intel. Got my fanboy check last night in the mail.

canyoneer
Sep 13, 2005


I only have canyoneyes for you

Civil posted:

That's cool, I'm paid off by Intel. Got my fanboy check last night in the mail.

I used to be a fanboy, but once you take a life in defense of your fandom you become a fanman. No coming back from that.

There are a couple subreddits for building PCs, and people in there recommend AMD stuff all the time. I have no idea why.

SwissArmyDruid
Feb 14, 2014

canyoneer posted:

I used to be a fanboy, but once you take a life in defense of your fandom you become a fanman. No coming back from that.

There are a couple subreddits for building PCs, and people in there recommend AMD stuff all the time. I have no idea why.

I think that there is a niche that, for anyone with a modicum of computer experience, is occupied by the Pentium AE. Because why would you go for the Athlon II X4 whatever or FX 6- or 4-core bullshit, when you can get an AE and crank that poo poo to 4.2 GHz on air with a modest aftermarket cooler like a Hyper 212?

For anyone who *doesn't* have this experience (and if you're asking in a PC-building subreddit, you probably don't), sure, I can see where AMD would make more sense on a non-overclocking price/performance basis. Because it's not really until you put the spurs to the G3258 that it shines.

SwissArmyDruid fucked around with this message at 18:11 on Oct 22, 2014

orange juche
Mar 14, 2012



SwissArmyDruid posted:

I think that there is a niche that, for anyone with a modicum of computer experience, is occupied by the Pentium AE. Because why would you go for the Athlon II X4 whatever or FX 6- or 4-core bullshit, when you can get an AE and crank that poo poo to 4.2 GHz on air with a modest aftermarket cooler like a Hyper 212?

For anyone who *doesn't* have this experience (and if you're asking in a PC-building subreddit, you probably don't), sure, I can see where AMD would make more sense on a non-overclocking price/performance basis. Because it's not really until you put the spurs to the G3258 that it shines.

When I read the reviews stating that you could hit 4.2 GHz comfortably on air with a modest $30 cooler, all I could think was

https://www.youtube.com/watch?v=6rL4em-Xv5o&t=73s


mayodreams
Jul 4, 2003


Hello darkness,
my old friend

orange juche posted:

When I read the reviews stating that you could hit 4.2 GHz comfortably on air with a modest $30 cooler, all I could think was

https://www.youtube.com/watch?v=6rL4em-Xv5o&t=73s

Hell, I've been running at 4.3 GHz on an i5-2500K with a Hyper 212 for almost 3 years with zero issues. This machine has been, bar none, the best I've ever built.
