Longinus00
Dec 29, 2005
Ur-Quan

evensevenone posted:

You're right, it is just as difficult to write cross platform software as it was in 1998 when PPC NT was last relevant.

There were C compilers for both PPC NT and x86 NT (and for unix many years prior). So long as you don't use anything specific to the architecture, the code ports fine! Hint: the same problem exists today even with .NET
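For example, here's the kind of thing I mean: a toy sketch of my own (not anything from the era) that compiles cleanly on every platform but gives different answers on big-endian PPC vs. little-endian x86, and on 32-bit vs. 64-bit ABIs:

code:

/* Compiles everywhere; "ports" nowhere. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word = 0x11223344;
    unsigned char first = *(unsigned char *)&word;

    /* 0x44 on little-endian x86, 0x11 on big-endian PPC */
    printf("first byte in memory: 0x%02x\n", first);

    /* 4 on 32-bit NT, 8 on LP64 unix -- code that assumes
       sizeof(long) == sizeof(void *) silently breaks */
    printf("sizeof(long) = %zu\n", sizeof(long));
    return 0;
}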

Longinus00
Dec 29, 2005
Ur-Quan
How is GPGPU for AMD parts in Linux? If it more or less requires the official AMD drivers to actually use the Fusion part of the CPU, I'm not going to be interested at all.

Longinus00
Dec 29, 2005
Ur-Quan

Alereon posted:

Phoronix has lots of tests of AMD hardware in Linux. Here's their review of AMD Fusion with open source drivers, though that was Brazos and not Llano. Why don't you want to use the Catalyst drivers?

That's just testing graphics performance; I see nothing there about compute (see the probe sketch at the end of this post). If I'm going to have to run Catalyst I'll just reboot into Windows, where it's faster anyway.

Devian666 posted:

I haven't had any issues with running computational stuff using the linux catalyst drivers. Is there a specific application that you need other drivers?

No, I was just looking at playing around with it. What is it currently being programmed in?
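For what it's worth, the quickest way to check whether a given driver stack actually exposes compute at all is to enumerate OpenCL devices; as far as I know, only Catalyst plus AMD's APP SDK exposes one right now, which is the whole problem. A minimal probe in C (my own sketch, assuming the vendor's OpenCL headers and ICD are installed; build with gcc probe.c -lOpenCL):

code:

#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    char name[256];

    if (clGetPlatformIDs(8, plats, &nplat) != CL_SUCCESS || nplat == 0) {
        puts("no OpenCL platform found (no compute-capable driver)");
        return 1;
    }
    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;

        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("platform: %s\n", name);
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
            continue;       /* this platform has no GPU device */
        for (cl_uint d = 0; d < ndev; d++) {
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("  GPU device: %s\n", name);
        }
    }
    return 0;
}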

Longinus00
Dec 29, 2005
Ur-Quan

Devian666 posted:

They delayed until September so let's hope so. It could take a while for stock to be available.

Welp!

http://www.xbitlabs.com/news/cpu/display/20110831102650_AMD_s_Highly_Anticipated_Bulldozer_Chips_Might_Face_Further_Delay.html

Longinus00
Dec 29, 2005
Ur-Quan

freeforumuser posted:

http://www.xbitlabs.com/news/cpu/display/20110901142352_Gigabyte_Accidentally_Reveals_AMD_s_FX_Launch_Lineup_Specs.html

First chip to finally break the 4GHz barrier, officially. Last near-candidate was the 3.8GHz P4 570 in Nov 2004.

POWER6 has had 4GHz+ chips for a while now, or are you restricting discussion to x86?

Longinus00
Dec 29, 2005
Ur-Quan

freeforumuser posted:

Found a pretty legit BD leak from PCWorld France.

Hmm, I wonder why the architecture is always so behind in games?

Longinus00
Dec 29, 2005
Ur-Quan

KillHour posted:

I've always wondered why everyone always seems to root for AMD. Is it nostalgia for :allears: The Good Ol' Days, or does everyone just love the underdog?

I know a lot of it is wanting the competition in the market, but I never saw much rooting for Intel back when AMD was on top.

AMD was never "on top". Having a better product than your competitor doesn't mean you're "on top".

Longinus00
Dec 29, 2005
Ur-Quan
It's starting to sound like Bulldozer might be a gamble in the same way that NetBurst and Itanium were gambles for Intel. Both of those were designed on the assumption that their "performance deficiencies" would be overcome by some sort of scaling: clockspeed for NetBurst, crazy compilers for Itanium. Bulldozer looks like it's betting not on clockspeed but on concurrency, and on eventually offloading more and more work onto the GPU. Even if BD flops, I wouldn't be surprised if Intel lifted some ideas off of it, like they've been so good at doing in the past.

VVV

Those are the goals for the follow-ups to Bulldozer in the same family. The whole Fusion thing.

Longinus00 fucked around with this message at 03:38 on Oct 3, 2011

Longinus00
Dec 29, 2005
Ur-Quan

WhyteRyce posted:

A possible explanation for some of the lackluster leaked performance numbers

http://www.xtremesystems.org/forums/showthread.php?275786-AMD-FX-8150-Bulldozer-finally-tested&p=4969164&viewfull=1#post4969164

Is this on LKML? Do you have links for the patch?

Longinus00
Dec 29, 2005
Ur-Quan
I found the thread with the final versions of the patch (the final ones I know about, anyway). Kernel build benchmarks are included.

https://lkml.org/lkml/2011/8/5/171

Longinus00
Dec 29, 2005
Ur-Quan

Alereon posted:

I would consider an 8-core processor that can't quite equal a quad-core to be a pretty serious failure. The HardOCP Cinebench numbers show Bulldozer BARELY beating a Phenom II X6, and losing slightly to the i7 2600K. Things are a bit better for POVRay, but I'd definitely say that multi-threaded performance is far below expectations. I never really expected per-core performance to be good, but I at least thought it would win pretty handily in heavily multi-threaded integer workloads, and that is definitively not the case. I would have also hoped that per-thread floating point performance would go up over Phenom II, but instead it seems to have dropped, pretty seriously when you consider that Bulldozer has a 200-500MHz clock speed advantage, depending on how effective Turbo Core is.

Bonus Edit: Anandtech's review has been delayed because Anand Lal Shimpi was hospitalized yesterday (he's fine now). They're hoping to have it up "soon", which should hopefully mean tonight.

Cinebench and POVRay are integer workloads?

Longinus00
Dec 29, 2005
Ur-Quan

WhyteRyce posted:

Why is Windows 7 getting poo poo when Linux needed a kernel patch as well?

The Linux kernel patch only increased performance by up to 10%; a 40% performance increase is pretty crazy all things considered. AMD has had problems with Windows kernel scheduling before (the Phenom Cool'n'Quiet problem), so I suppose this isn't unprecedented. That was also a problem core parking would have solved; it just took until Intel got on the case for it to be implemented in the kernel.

I wouldn't personally believe anything until more people can actually try this stuff out and see what changes are really being done.

BlackMK4 posted:

Probably because everyone knows Linux already has compatibility issues with a lot of hardware. :v:

Considering Bulldozer is supposed to be aimed at improving server performance, I'm sad that all or most of the benchmarks so far have been desktop apps on Windows. Hopefully someone will get their hands on some Opterons and start doing those tests.

Longinus00 fucked around with this message at 07:11 on Oct 16, 2011

Longinus00
Dec 29, 2005
Ur-Quan

Setzer Gabbiani posted:

Given all the bad press, I'm surprised the 8120 and 8150 are both sold out on Newegg

Remember the reason it was delayed in the first place? Yield issues.

Longinus00
Dec 29, 2005
Ur-Quan
Hey, monopoly markets are fine. Look at how much innovation is going on in the ISP/telecom industry; we keep getting more and more bandwidth and better prices, amiright guys? Hell, IE6 was so good Microsoft didn't even need to upgrade it for years and years.

Longinus00 fucked around with this message at 19:21 on Oct 20, 2011

Longinus00
Dec 29, 2005
Ur-Quan

trandorian posted:

But I did get my speeds upgraded this year out of the blue? Only "competition" my cable provider has here is 3 mbps DSL. AMD's been as effective a competitor against Intel as DSL has been against cable for at least the last 2 years, which is to say, not very.

And IE6 wasn't upgraded because Vista was supposed to be out in 2004 instead of 2006. Take a minute to remember back, too: all the other browsers were crap compared to IE6 until Firefox finally went stable in 2004. IE6 was even the most standards compliant browser for several years.

That's great news for all the AT&T and Comcast/Verizon customers getting faster speeds and lower bandwidth caps! It's also nice that IE6 was so standards compliant that when actually standards-compliant browsers came along none of those sites worked with them, and the later IEs needed an IE-compatibility mode (that is to say, IE6 wasn't standards compliant; it's just that sites were forced to work with it).

Longinus00
Dec 29, 2005
Ur-Quan

trandorian posted:

Why yes, when browsers more standards compliant than IE6 came out, they were more standards compliant than IE6. How insightful! You do realize that the Microsoft plan was to have IE releases tied to OSes, and that XP SP2 happening screwed everything up and delayed Vista, right? Microsoft actually halted development of the next OS for a decent period of time to revamp XP with SP2. IE7 was due to come out at roughly the same time that browsers on par with IE6 were finally coming out.

Seriously, just because IE6 turned out not to have kept up with web standards for 10 years after its release doesn't mean it wasn't the best at its time (2001) and for several years after. Hell, Bulldozer was supposed to launch in 2007 originally, right (not with the same name, but the same concept)? AMD's suffering the same kind of problem MS did with IE; schedules go out of whack and all that.

I'm not sure you're getting it, unless you were making a commentary on the difference between de jure and de facto standards. IE6 was purposely not standards compliant, but because of its market share all the sites had to code to its quirks, thus screwing other browsers with much smaller market share. AMD is also in a totally different position than Microsoft, because BD is not some vehicle with which it's going to change the CPU standards that other people will have to deal with later. If you only relate them by delays, then I suppose BD is actually like Half-Life 2 and every other product that has suffered schedule delays.

Longinus00 fucked around with this message at 22:21 on Oct 20, 2011

Longinus00
Dec 29, 2005
Ur-Quan

trandorian posted:

IE6 was the most standards compliant browser when it was released; that's a fact, and your Microsoft bashing doesn't change it. No browser was fully standards compliant before IE6, and in fact there's still none now that is compliant with everything. And there was none as compliant as IE6 until many years after its release. Nor was "introduce proprietary support things" a Microsoft-only thing; Netscape was especially bad about trying that, and the ad-supported Opera of the time had its own special things it supported.

AMD is in fact trying to make the case that processors should be designed like Bulldozer: modules of two integer cores sharing cache and an FPU. And of course, throughout their history Intel and AMD have each tried to introduce new instruction sets and convince people to use them (MMX, 3DNow!, etc.). Just because they're failing doesn't make it not the case!

Really? Opera 6.0, which was released a month after IE6 and supported CSS2, is less standards compliant than IE6?

Longinus00
Dec 29, 2005
Ur-Quan
Intel is a huge company. They have fabs, RAM (that's how they started out, before they did processors), flash, CPUs, and a whole bunch of miscellaneous stuff. It's not like they only do CPUs.

Imagine if Intel's fabs got split off, GlobalFoundries-style, so other people could use their process.

Longinus00 fucked around with this message at 04:07 on Oct 21, 2011

Longinus00
Dec 29, 2005
Ur-Quan

Combat Pretzel posted:

I wouldn't be surprised if someone came up with an assembly-level recompiler... After all, only the APIs need to be there, not the CPU architecture per se.

What do APIs have to do with assembly?

Longinus00
Dec 29, 2005
Ur-Quan
Looks like Phoronix finally got around to benchmarking the 8150 (skip to page 6+). I wouldn't normally bring up such a trashy site, but these are the first Linux benchmarks I know of, and the chip seems to do okay. Too bad about the crazy power draw.

Longinus00
Dec 29, 2005
Ur-Quan

Agreed posted:

Do they just not own a 2600K or what's the deal there?

Phoronix isn't big enough to get sent production samples or anything, which is the same reason the review is so late. I think he might have purchased this 8150 out of pocket, so it doesn't surprise me that he doesn't have a very comprehensive field to test against (notice the lack of a hex-core K10).

Longinus00
Dec 29, 2005
Ur-Quan

Zhentar posted:

That article is a bit deceptive, because there's one thing it doesn't make clear... all of those scores are significantly lower than if they were just allowed to run with 8 threads in the first place.

The other thing that's not clear is to what extent those benchmarks are floating point. With FP operations, the modules really are a lot closer to simply a single core with Hyperthreading.

I think it's not trying to compare 2/4 threads vs. 8 threads; it's figuring out how best to schedule when there isn't full core/module saturation, like in games. If Windows is trying to maximize idle cores/modules in lightly threaded situations, that could lead to lower performance. This might be where the Windows 8 10% performance increase comes from.
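To make that concrete, here's a toy affinity test (my own sketch, not the actual patch) for comparing "packed" scheduling (two threads on one module, sharing its front end) against "spread" (one thread per module). It assumes module pairs are enumerated as CPUs (0,1), (2,3), and so on, which is how Linux numbers the FX-8150 as far as I know; adjust for your box. Build with gcc -pthread:

code:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void *spin(void *arg)
{
    (void)arg;
    volatile unsigned long n = 0;
    for (unsigned long i = 0; i < 2000000000UL; i++)
        n += i;                     /* integer-only busy work */
    return NULL;
}

/* create a thread pinned to one logical CPU */
static pthread_t start_pinned(int cpu)
{
    pthread_t t;
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
    if (pthread_create(&t, &attr, spin, NULL) != 0) {
        perror("pthread_create");
        exit(1);
    }
    return t;
}

int main(int argc, char **argv)
{
    /* "packed": CPUs 0 and 1 share a module; "spread": 0 and 2 don't */
    int spread = argc > 1 && argv[1][0] == 's';
    pthread_t a = start_pinned(0);
    pthread_t b = start_pinned(spread ? 2 : 1);

    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("done (%s)\n", spread ? "spread" : "packed");
    return 0;
}

Run both variants under `time`; the gap between them (or lack of one) shows how much two threads on one module actually fight over the shared front end for integer work.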

Longinus00 fucked around with this message at 16:32 on Oct 28, 2011

Longinus00
Dec 29, 2005
Ur-Quan

Zhentar posted:

Yeah, I realize that's what they're intending to compare, but the article doesn't do a good job of conveying that; I was pointing it out because it would be easy for someone to walk away from that article with the wrong conclusion.

My other complaint, about it not being clear how floating-point-heavy the benchmarks are, is that that directly impacts how applicable this is to games and other desktop scenarios. If it's faster because it reduces contention on FP components, then it's meaningless for most desktop workloads. If it's faster because it reduces cache contention, or for some other reason, then it's more likely to help other workloads.

It might help out even in non-FP situations, because BD shares decoders across a module. You might get better performance simply by being able to throw all of a module's decoders at one thread instead of two. What this does to power consumption is a different matter.

Longinus00
Dec 29, 2005
Ur-Quan

Fuzzy Mammal posted:

Is there any news on the 28nm GPU lineup? Southern Islands is the chip family codename, right? I haven't heard anything about them in months, and I thought they were supposed to be out by now? Granted, it feels like the next round of Nvidia boards is in the same boat.

I bet that if anything's holding them up it's yields on the new 28nm process.

Longinus00
Dec 29, 2005
Ur-Quan

This doesn't refute the claim that they're pulling out of the desktop x86 market.

Slider posted:

Is bulldozer actually any faster in games compared to the old phenom II chips? Newegg has the fx-4100 for 120 bucks and it doesn't seem like a bad deal if you buy a 212+ and overclock it. I know the heat/power consumption sucks, but a quad core 4.5ghz processor if you're on a budget doesn't seem that terrible to me.

Short answer: No.

Longinus00
Dec 29, 2005
Ur-Quan

Daeno posted:

Supposed 7000 series pricing.

Well...on 2 cards at least.

I wonder how much of this is due to the terrible yields TSMC is giving them vs. abandoning the whole chip philosophy they started with the 4800 series.

Longinus00
Dec 29, 2005
Ur-Quan

Shaocaholica posted:

They could have used log scale.

Why would they do that?

Longinus00
Dec 29, 2005
Ur-Quan
Considering the process issues, it's not surprising they don't have enough supply. It's a repeat of last year, and the year before.

Longinus00
Dec 29, 2005
Ur-Quan
How is this even news? Nvidia started it before ATI, and they've both been doing it ever since. You guys going to rage out every year when it happens again?

Longinus00
Dec 29, 2005
Ur-Quan
When's the last time ATI/AMD was "on top" anyway? The only thing that would be surprising is if Nvidia were able to price match ATI/AMD's top card.

Longinus00
Dec 29, 2005
Ur-Quan
I was talking about a single-chip solution. Crossfire/SLI brings its own problems (driver support required for it to work in games, misc. issues in games, micro stutter in 2x mode, etc.).

Longinus00
Dec 29, 2005
Ur-Quan
How did my comment, that a new Nvidia card being faster than the 7970 isn't a big deal and just maintains the status quo, turn into this?

I mostly agree about ATI's driver deficiencies, however, especially on the Linux side, but that's another can of worms.

Longinus00
Dec 29, 2005
Ur-Quan
Nvidia's started the trash talking already, so I guess that must mean a sizable number of early adopters are jumping ship to the 7970. Hopefully by the time the next Fermi comes out (the article says a March-April timeline) AMD's yields will be good enough for them to start lowering prices.

Longinus00
Dec 29, 2005
Ur-Quan
In other news:

Nvidia (stop me if you've heard this before) says Kepler is going to be pricey because of yield issues. Get ready to consider the 7970's price a "bargain".

Intel, having no direct CPU competition, decides that it can sit on Sandy Bridge until inventory sells out. Nobody could ever have seen this coming, nobody.

Longinus00
Dec 29, 2005
Ur-Quan

Alereon posted:

All indications seem to be that it's a Trinity APU, two Piledriver modules (four cores) and a VLIW4 GPU.

If this is true then it might work out great for AMD, as game developers try to squeeze maximum performance out of AMD's somewhat peculiar module design. Optimizations learned there might be applicable to more general programs, or to compilers targeting Bulldozer-esque designs.

Longinus00
Dec 29, 2005
Ur-Quan
My guiding principle for hardware these days is "don't buy any individual component that's over $200". Obviously that philosophy isn't for everybody, but it does mean that for the price of someone else's video card you can get a fast enough, fully working system (especially if you reuse parts in an upgrade).

Longinus00 fucked around with this message at 06:13 on Apr 30, 2012

Longinus00
Dec 29, 2005
Ur-Quan

grumperfish posted:

I usually spend around $250 for a videocard, and haven't ever really been disappointed with performance. I don't have extreme requirements, but that usually puts me well into the mid-range with power to spare, and in two years I just grab another ~$250 card to move up. This worked out particularly great with the 4870 and the unlocked 6950 I'm running now, as I can very nearly max everything out (at 1680x1050), and overclocking fills the gaps when I want to run stupid-high settings with something like The Witcher II or Metro 2033. $500+ videocards have their place for certain people, but I'd rather trade off maximum performance for being able to continually receive "good enough" performance without having to turn many settings down. I don't think I'd handle moving to a 5770-6850 very well, as the inconvenience of having to tailor settings would outweigh (for me) the reduced cost vs. a more powerful card.

If you're willing to put up with rebates, count the cost of an included game as part of a "discount", and not buy right at release, then a 6950 just squeezes in as a $200 card. I ended up going for a 6870 because you could get them for around $150 after rebate, and it basically doubled the performance of my old 4850. I'm actually surprised that I can run at 60fps @1080p in many new titles even without any overclocking.

Longinus00
Dec 29, 2005
Ur-Quan

Civil posted:

I'd spend good money if AMD (or nv) could produce a video card that performed at mid-range levels, but didn't require an aux. power source or a massive cooling unit. I'm currently rocking a HD5450 because my wife and I wanted a quiet PC, and it does just fine pushing dual 1920 monitors. The last gam3r card I had in there (4850) sounded like a hairdryer.

Are the days of passively cooled video cards gone?

The 4800 series cards were notoriously hot; the 4870 would get close to 100C in games. The newer generations of cards run a bit cooler, and cooling solutions have improved since then. Performance-geared midrange cards all come with multi-fan coolers, which lowers the noise even further. If even that is not enough, you can try the ridiculous passive-heatsink cards or the even larger aftermarket passive heatsinks.

Longinus00
Dec 29, 2005
Ur-Quan

Civil posted:

While that card is passively cooled, it still requires additional power, and has the case heating issues that go along with that. I was hoping AMD would solve the problem at the chipset level, rather than an OEM solution that takes 3 slots because the heatsink is so massive.

This is the card I'm currently using. Remember when just about every video card looked like this? I'd like to see mid-range performance in a package this size. Pipe dream?
http://www.newegg.com/Product/Product.aspx?Item=N82E16814131339

The reason midrange cards require additional power is that it takes all that additional power to reach midrange performance: the PCIe slot itself only delivers up to 75W, so anything drawing meaningfully more than that physically needs an aux connector (a 6-pin adds up to 75W, an 8-pin up to 150W). You may as well complain about how new midrange CPUs require extra 12V headers on the motherboard. There's nothing you can currently change about the "chipset", whatever you mean by that, to fix it. Now, if you mean you want a card as fast as the midrange of X years ago, then you're in luck.

Longinus00
Dec 29, 2005
Ur-Quan

sh1gman posted:

Well, as far as overclocking, the 6870 is literally almost impossible to overclock; either I got a bad chip or something, because a mere 5-10MHz bump on the core causes artifacts in 3DMark Vantage, and same deal for Unigine Heaven (I have since read reviews on several tech sites and they all come to the same conclusion that the 6870 just has zero overclock headroom).
Maybe future-proof was the wrong word to use, as I am aware I will have to upgrade to keep up, but I keep hearing bad poo poo about going with dual-card setups. I currently run a single 1920x1200 display and I get respectable FPS (between 45 and 60) in Skyrim/BF3/Other New poo poo, but was hoping to go to Eyefinity with a 5760x1200 resolution. Does anyone know how much GPU horsepower would be required to facilitate this? Is it even possible on a single card except with the insanely expensive (ha) 7970?

E: It seems that the limiting factor in Eyefinity is not the processing power of the GPU but the amount of RAM onboard; it looks like the 6870 is plenty powerful enough if it's tied to 2GB of RAM, so I just have to look for a replacement pair of those or a 7870 2GB edition. :cripes: RAM is expensive on video cards though.

How badly do you want that extra performance? Upgrading after just one generation is usually not a worthwhile investment, especially since the 7800 series is so much more expensive than the 6800 series was. If you really want to move up and 'future proof' then you pretty much might as well go all out and splurge on one of the 7900 series. I happen to have a 6870, and I have no problem OCing it to the factory OC levels that manufacturers ship the more expensive cards at. I also have no problem getting 60fps @1080p in Skyrim, but I don't play at the highest texture level that was patched in.
